From Former at physicist.net Fri Jan 1 19:04:46 2021
From: Former at physicist.net (Former Physicist)
Date: Fri, 1 Jan 2021 19:04:46 -0500
Subject: [SciPy-Dev] Outstanding hyp2f1 PRs
Message-ID: <3e647ddb-ba94-29bf-6ab9-4185667806a0@physicist.net>

Hello scipy-dev,

I have several outstanding (and now stale) PRs that fix some issues with the hypergeometric function. They are as follows: 8548, 8151 and 8110.

The first two (8548 and 8151) are old enough that they now have failing CI/CD pipelines. (When I opened these PRs 3 years ago, I'm pretty sure they were passing. I'm guessing you guys updated or changed your pipelines in the meantime.)

For 8548, there is a unit test that is failing in the scipy.signal module (the test_symmetry unit test). I tried troubleshooting this over the summer, but I was not able to find the issue. This unit test passes locally on my laptop and only seems to fail in the azure pipeline for a specific environment.

For 8151, another developer, h-vetinari, has created a duplicate PR (13310) of mine and has actually fixed the CI/CD issues. I don't know if h-vetinari is on this list but I will leave a comment on github telling him how to subscribe.

So my questions are as follows:
* Can someone help me troubleshoot the failing unit tests in 8548? I do not know much about scipy's travis/azure pipelines or whatever and have no idea how to fix that stuff.
* What is the right procedure for fixing 8151? I'm perfectly fine with h-vetinari fixing those issues as I don't currently have time to follow up. But would it make more sense just to merge h-vetinari's changes into the branch of my original PR?
* In general, what can be done to speed up the closing of these PRs? It's been around 2-3 years I think...

Starting next week and up to the end of probably February, I'm going to be pretty busy and probably won't be able to work on those PRs. But after that, I should be free to respond to reviewer comments and help expedite the closing of these PRs.

Adam (FormerPhysicist)

From ralf.gommers at gmail.com Sat Jan 2 11:23:25 2021
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 2 Jan 2021 17:23:25 +0100
Subject: [SciPy-Dev] Outstanding hyp2f1 PRs
In-Reply-To: <3e647ddb-ba94-29bf-6ab9-4185667806a0@physicist.net>
References: <3e647ddb-ba94-29bf-6ab9-4185667806a0@physicist.net>
Message-ID:

On Sat, Jan 2, 2021 at 1:05 AM Former Physicist wrote:

> Hello scipy-dev,
>
> I have several outstanding (and now stale) PRs that fix some issues with
> the hypergeometric function. They are as follows: 8548, 8151 and 8110.
>
> The first two (8548 and 8151) are old enough that they now have failing
> CI/CD pipelines. (When I opened these PRs 3 years ago, I'm pretty sure
> they were passing. I'm guessing you guys updated or changed your pipelines
> in the meantime.)
>
> For 8548, there is a unit test that is failing in the scipy.signal module
> (the test_symmetry unit test). I tried troubleshooting this over the
> summer, but I was not able to find the issue. This unit test passes
> locally on my laptop and only seems to fail in the azure pipeline for a
> specific environment.
>
> For 8151, another developer, h-vetinari, has created a duplicate PR
> (13310) of mine and has actually fixed the CI/CD issues. I don't know if
> h-vetinari is on this list but I will leave a comment on github telling
> him how to subscribe.
> So my questions are as follows:
> * Can someone help me troubleshoot the failing unit tests in 8548? I do
> not know much about scipy's travis/azure pipelines or whatever and have no
> idea how to fix that stuff.

The CI logs have disappeared (they get deleted after, I think, a month or so), so you'll have to re-run them. Maybe rebase on current master, or merge master into your branch. A scipy.signal failure sounds unrelated though, and if so you can ignore it. If it fails again, comment on the PR and I'll have a look.

> * What is the right procedure for fixing 8151? I'm perfectly fine with
> h-vetinari fixing those issues as I don't currently have time to follow up.
> But would it make more sense just to merge h-vetinari's changes into the
> branch of my original PR?

That would work too. Note that h-vetinari isn't a SciPy maintainer, so they weren't able to push to your branch directly. At this point you could make them a collaborator on your fork to push forward the original, or keep the new PR - either way is fine, you can work it out together.

> * In general, what can be done to speed up the closing of these PRs? It's
> been around 2-3 years I think...

Yeah, that's the trouble with PRs that are for highly specialized algorithmic code like hyp2f1 - we only have a few maintainers with deep knowledge on those, so if they're busy then review is difficult. From my perspective it's actually helpful if two people collaborate on a PR, like for PR 8151. If two contributors are both happy, that makes the job for the maintainer who needs to merge it a lot easier.

> Starting next week and up to the end of probably February, I'm going to be
> pretty busy and probably won't be able to work on those PRs. But after
> that, I should be free to respond to reviewer comments and help expedite
> the closing of these PRs.

Sounds good!

Cheers,
Ralf

From mhaberla at calpoly.edu Sun Jan 3 21:18:57 2021
From: mhaberla at calpoly.edu (Matt Haberland)
Date: Sun, 3 Jan 2021 18:18:57 -0800
Subject: [SciPy-Dev] Welcome Nicholas McKibben to the SciPy core team!
Message-ID:

Hi all,

On behalf of the SciPy developers, I'd like to welcome Nicholas McKibben as a member of the core team. Nicholas has been contributing for just over a year, and although he only has a few PRs (https://github.com/scipy/scipy/pulls/mckib2), one of those was very big: gh-12043 (https://github.com/scipy/scipy/pull/12043) wrapped the HiGHS C++ linear programming library, dramatically improving SciPy's linear programming capabilities. In addition to code reviews and bug fixes, he currently has several major enhancement projects in the works: wrappers for PROPACK, statistical distributions from Boost, and an interface to the HiGHS mixed-integer programming solver.

Given all of his great work in such an unusual year, I'm really looking forward to seeing what he does in 2021!

Happy New Year, everyone!
Matt

From warren.weckesser at gmail.com Sun Jan 3 22:21:25 2021
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sun, 3 Jan 2021 22:21:25 -0500
Subject: [SciPy-Dev] Welcome Nicholas McKibben to the SciPy core team!
In-Reply-To: References: Message-ID:

On 1/3/21, Matt Haberland wrote:
> [...]

Thanks for all the great work, Nicholas!

Warren

From andyfaff at gmail.com Mon Jan 4 03:22:16 2021
From: andyfaff at gmail.com (Andrew Nelson)
Date: Mon, 4 Jan 2021 19:22:16 +1100
Subject: [SciPy-Dev] Welcome Nicholas McKibben to the SciPy core team!
In-Reply-To: References: Message-ID:

Welcome Nicholas.

On Mon, 4 Jan 2021 at 14:22, Warren Weckesser wrote:
> [...]

--
_____________________________________
Dr. Andrew Nelson
_____________________________________

From evgeny.burovskiy at gmail.com Mon Jan 4 04:48:32 2021
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Mon, 4 Jan 2021 12:48:32 +0300
Subject: [SciPy-Dev] Welcome Nicholas McKibben to the SciPy core team!
In-Reply-To: References: Message-ID:

Welcome Nicholas!

On Mon, Jan 4, 2021 at 5:19 AM Matt Haberland wrote:
> [...]

From rlucas7 at vt.edu Mon Jan 4 08:54:31 2021
From: rlucas7 at vt.edu (rlucas7 at vt.edu)
Date: Mon, 4 Jan 2021 08:54:31 -0500
Subject: [SciPy-Dev] Welcome Nicholas McKibben to the SciPy core team!
In-Reply-To: References: Message-ID:

Welcome to the team Nicholas!

-Lucas Roberts

> On Jan 4, 2021, at 4:49 AM, Evgeni Burovski wrote:
> [...]

From warren.weckesser at gmail.com Tue Jan 5 11:53:01 2021
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 5 Jan 2021 11:53:01 -0500
Subject: [SciPy-Dev] Name for Page's L test
Message-ID:

Hey all,

Matt Haberland has implemented Page's L test in the pull request https://github.com/scipy/scipy/pull/12531. I'd like to merge the PR, but Ralf has suggested that the name, `pagel`, is "a terrible name", and has suggested `page_test` or `page_l_test`. (See the comments at https://github.com/scipy/scipy/pull/12531#pullrequestreview-447123727.) Matt's reasoning for `pagel` is that it is consistent with the style used for many other tests in stats. I don't have a strong preference, and when that happens, I tend to go with the original author's preference. I think Ralf prefers to have "test" in the name. Some existing tests do, but many others don't.

Anyone else have an opinion? Either about this specific case, or about the general question of objective criteria for a "good" name?
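For concreteness, here's what the two leading candidates would look like at the call site (a hypothetical sketch; the function isn't merged yet, and the actual signature in gh-12531 may differ):

    from scipy.stats import pagel        # spelling currently in the PR
    statistic, pvalue = pagel(data)      # hypothetical call

    from scipy.stats import page_l_test  # one of Ralf's suggestions
    statistic, pvalue = page_l_test(data)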
Warren

From mhaberla at calpoly.edu Tue Jan 5 12:03:47 2021
From: mhaberla at calpoly.edu (Matt Haberland)
Date: Tue, 5 Jan 2021 09:03:47 -0800
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

Just a little more context - `page_l` was the original name, but I changed it for consistency with `kendalltau`, `spearmanr`, `pearsonr`, `johnsonsb`, `johnsonsu`, `mannwhitneyu`, and `friedmanchisquare`: surname adjoined with variable name. There are some functions that have a `_` after the surname, but they don't have the name of a variable after them; they're more of a description (e.g. `fisher_exact`, `yeojohnson_normmax`). Other statistical test names are here.

And for completeness, Warren asked: "I'd like to merge this soon. Are you OK with pagel?" and Ralf's response was: "Sure. It's a terrible name, but at least it's consistent with the other terrible names - and page_l also won't tell anyone what the function does."

I would suggest that we consider this test in the context of past and future hypothesis test names rather than singling out this one test. If we decide that we want to change the convention, that's fine, but I'd prefer that be a standard going forward.

On Tue, Jan 5, 2021 at 8:53 AM Warren Weckesser wrote:
> [...]

--
Matt Haberland
Assistant Professor
BioResource and Agricultural Engineering
08A-3K, Cal Poly

From robert.kern at gmail.com Tue Jan 5 12:10:54 2021
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 5 Jan 2021 12:10:54 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

On Tue, Jan 5, 2021 at 11:53 AM Warren Weckesser wrote:
> [...]
> Anyone else have an opinion? Either about this specific case, or
> about the general question of objective criteria for a "good" name?

Wikipedia suggests that it is also known as Page's trend test, which might make for a more informative function name.

https://en.wikipedia.org/wiki/Page%27s_trend_test

--
Robert Kern

From robert.kern at gmail.com Tue Jan 5 12:13:07 2021
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 5 Jan 2021 12:13:07 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

On Tue, Jan 5, 2021 at 12:10 PM Robert Kern wrote:
> On Tue, Jan 5, 2021 at 11:53 AM Warren Weckesser wrote:
>> [...]
>
> Wikipedia suggests that it is also known as Page's trend test, which might
> make for a more informative function name.
>
> https://en.wikipedia.org/wiki/Page%27s_trend_test

Counterpoint: "The Page test is not a trend test"

https://cran.r-project.org/web/packages/cultevo/vignettes/page.test.html

--
Robert Kern

From charlesr.harris at gmail.com Tue Jan 5 13:05:35 2021
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 5 Jan 2021 11:05:35 -0700
Subject: [SciPy-Dev] NumPy 1.19.5 released
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce the release of NumPy 1.19.5. NumPy 1.19.5 is a short bugfix release. Apart from fixing several bugs, the main improvement is an update to OpenBLAS 0.3.13 that works around the Windows 2004 fmod bug while not breaking execution on other platforms. This release supports Python 3.6-3.9 and is planned to be the last release in the 1.19.x cycle.

NumPy wheels can be downloaded from PyPI; source archives, release notes, and wheel hashes are available on Github. Linux users will need pip >= 19.3 in order to install manylinux2010 and manylinux2014 wheels.

*Contributors*

A total of 8 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- Charles Harris
- Christoph Gohlke
- Matti Picus
- Raghuveer Devulapalli
- Sebastian Berg
- Simon Graham +
- Veniamin Petrenko +
- Bernie Gray +

*Pull requests merged*

A total of 11 pull requests were merged for this release.

- #17756: BUG: Fix segfault due to out of bound pointer in floatstatus...
- #17774: BUG: fix np.timedelta64('nat').__format__ throwing an exception
- #17775: BUG: Fixed file handle leak in array_tofile.
- #17786: BUG: Raise recursion error during dimension discovery
- #17917: BUG: Fix subarray dtype used with too large count in fromfile
- #17918: BUG: 'bool' object has no attribute 'ndim'
- #17919: BUG: ensure _UFuncNoLoopError can be pickled
- #17924: BLD: use BUFFERSIZE=20 in OpenBLAS
- #18026: BLD: update to OpenBLAS 0.3.13
- #18036: BUG: make a variable volatile to work around clang compiler bug
- #18114: REL: Prepare for the NumPy 1.19.5 release.

Cheers,

Charles Harris

From rlucas7 at vt.edu Tue Jan 5 18:01:51 2021
From: rlucas7 at vt.edu (rlucas7 at vt.edu)
Date: Tue, 5 Jan 2021 18:01:51 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

IIUC this is a test of monotonicity, that is what is implied w/the colloquial expression "trending upward", so I'm confused as to why this isn't a trend.

Perhaps the author has conflated the more specific "linear trend"?

It seems like a *monotone* test, maybe something that indicates monotone? (That would at least be descriptive) e.g. "page_monotone"; a "_test" could be added if we want that part consistent, but it's a long name at that point.

I could also see arguments for dropping "page" from the name. The test isn't a standard/ubiquitous one like the t or binomial tests, so the need for following a convention is less. Unless we want to match R in naming.

My 2 cents.

-Lucas Roberts

> On Jan 5, 2021, at 12:13 PM, Robert Kern wrote:
> [...]

From robert.kern at gmail.com Tue Jan 5 19:40:42 2021
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 5 Jan 2021 19:40:42 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

On Tue, Jan 5, 2021 at 6:04 PM wrote:

> IIUC this is a test of monotonicity, that is what is implied w/the
> colloquial expression "trending upward", so I'm confused as to why this
> isn't a trend.
>
> Perhaps the author has conflated the more specific "linear trend"?

I think the point they are making is that the null hypothesis gets rejected for even a single treatment being (consistently) lower than the following one. Whereas one might expect a "trend" to span across the whole (or substantial part of) the treatment space. I'm afraid I don't care enough about this area of statistics to dive any deeper.

I don't really mind one way or the other. I'd rather name it something that helps people find it even if some experts may quibble about the strict accuracy of the name. Some combination of `page` and `trend` seems to me to be better than just `page` or `pagel`.

--
Robert Kern

From rlucas7 at vt.edu Wed Jan 6 09:08:57 2021
From: rlucas7 at vt.edu (rlucas7 at vt.edu)
Date: Wed, 6 Jan 2021 09:08:57 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

> On Jan 5, 2021, at 7:41 PM, Robert Kern wrote:
> [...]
> I don't really mind one way or the other. I'd rather name it something
> that helps people find it even if some experts may quibble about the
> strict accuracy of the name. Some combination of `page` and `trend` seems
> to me to be better than just `page` or `pagel`.

I concur.

From josef.pktd at gmail.com Wed Jan 6 09:54:57 2021
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 6 Jan 2021 09:54:57 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

On Wed, Jan 6, 2021 at 9:09 AM wrote:
> [...]

I agree with "Some combination of `page` and `trend` seems to me to be better".

I have seen "trend test" used in several cases for tests of equality with trending, ordered, monotonic alternatives.
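(For example, the Cochran-Armitage test for trend in proportions and the Jonckheere-Terpstra test for ordered alternatives are commonly referred to as trend tests.)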
There might be other trend tests that end up in scipy.stats, so qualifying by "page" is appropriate. `page_l_test` is more like `mood`, not famous enough to remember what it does without looking it up.

Aside: In statsmodels I would use something that combines "rank" and "trend". (I ended up using `rank_compare_2indep` for my version of the brunner_munzel test and statistic in statsmodels.)

Josef

From nicholas.bgp at gmail.com Wed Jan 6 20:53:45 2021
From: nicholas.bgp at gmail.com (Nicholas McKibben)
Date: Wed, 6 Jan 2021 18:53:45 -0700
Subject: [SciPy-Dev] Bump GCC 4.8 to GCC 5
Message-ID:

Hi all,

We propose to move the lowest supported GCC version from GCC 4.8 to GCC 5.5. PR here: https://github.com/scipy/scipy/pull/13347

This is to keep us on track with the SciPy toolchain roadmap and allow for new feature development using full C++14 support. The PR has a little bit of discussion about the move, but please send feedback if this will negatively impact your projects, and let us know how we can mitigate the trouble.

Best,
Nicholas

From warren.weckesser at gmail.com Wed Jan 6 21:25:32 2021
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 6 Jan 2021 21:25:32 -0500
Subject: [SciPy-Dev] ENH - New stat distribution | Generalized Hyperbolic
In-Reply-To: References: Message-ID:

On 12/30/20, Ralf Gommers wrote:
> On Wed, Dec 30, 2020 at 9:11 AM Gabriele Bonomi wrote:
>
>> Hello guys,
>>
>> I would like to socialize the fact that some work is currently being done
>> to include the Generalized Hyperbolic Distribution in scipy.
>>
>> In a nutshell, this is a distribution that generalizes a few other
>> distributions already in scipy (e.g. t, normal inverse gaussian, laplace -
>> among others).
>>
>> I do not think this is a duplicate, but please shoot if you have any
>> concerns/suggestions wrt the above.
>
> Thanks Gabriele, sounds like a good idea to add that distribution.

Agreed, I think this would be a good addition to SciPy.

Warren

> Cheers,
> Ralf

From andrea.gavana at gmail.com Fri Jan 8 04:20:27 2021
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Fri, 8 Jan 2021 10:20:27 +0100
Subject: [SciPy-Dev] Global Optimization Benchmarks
Message-ID:

Dear SciPy Developers & Users,

long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013.

This new set of benchmarks pretty much supersedes - and greatly expands - the previous analysis that you can find at this location: http://infinity77.net/global_optimization/ .

The approach I have taken this time is to select as many benchmark test suites as possible: most of them are characterized by test function *generators*, from which we can actually create an almost unlimited number of unique test problems. Biggest news are:

1. This whole exercise is made up of *6,825* test problems divided across *16* different test suites: most of these problems are of low dimensionality (2 to 6 variables) with a few benchmarks extending to 9+ variables.
With all the sensitivities performed during this exercise on those benchmarks, the overall grand total number of function evaluations stands at *3,859,786,025* - close to *4 billion*. Not bad.

2. A couple of "new" optimization algorithms I have ported to Python:

- MCS: Multilevel Coordinate Search, it's my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ). I have added a few, minor improvements compared to the original implementation.
- BiteOpt: BITmask Evolution OPTimization, I have converted the C++ code into Python and added a few, minor modifications.

Enough chatting for now. The 13 tested algorithms are described here:

http://infinity77.net/go_2021/

High level description & results of the 16 benchmarks:

http://infinity77.net/go_2021/thebenchmarks.html

Each benchmark test suite has its own dedicated page, with more detailed results and sensitivities.

List of tested algorithms:

1. *AMPGO*: Adaptive Memory Programming for Global Optimization: this is my Python implementation of the algorithm described here:
http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
I have added a few improvements here and there based on my Master Thesis work on the standard Tunnelling Algorithm of Levy, Montalvo and Gomez. After AMPGO was integrated in lmfit, I have improved it even more - in my opinion.

2. *BasinHopping*: Basin hopping is a random algorithm which attempts to find the global minimum of a smooth scalar function of one or more variables. The algorithm was originally described by David Wales:
http://www-wales.ch.cam.ac.uk/
BasinHopping is now part of the standard SciPy distribution.

3. *BiteOpt*: BITmask Evolution OPTimization, based on the algorithm presented in this GitHub link:
https://github.com/avaneev/biteopt
I have converted the C++ code into Python and added a few, minor modifications.

4. *CMA-ES*: Covariance Matrix Adaptation Evolution Strategy, based on the following algorithm:
http://www.lri.fr/~hansen/cmaesintro.html
http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for the algorithm)

5. *CRS2*: Controlled Random Search with Local Mutation, as implemented in the NLOpt package:
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation

6. *DE*: Differential Evolution, described in the following page:
http://www1.icsi.berkeley.edu/~storn/code.html
DE is now part of the standard SciPy distribution, and I have taken the implementation as it stands in SciPy:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution

7. *DIRECT*: the DIviding RECTangles procedure, described in:
https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf
http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L (Python code for the algorithm)

8. *DualAnnealing*: the Dual Annealing algorithm, taken directly from the SciPy implementation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing

9. *LeapFrog*: the Leap Frog procedure, which was recommended to me, taken from:
https://github.com/flythereddflagg/lpfgopt

10. *MCS*: Multilevel Coordinate Search, it's my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ):
https://www.mat.univie.ac.at/~neum/software/mcs/
I have added a few, minor improvements compared to the original implementation. See the MCS section for a quick and dirty comparison between the Matlab code and my Python conversion.

11. *PSWARM*: Particle Swarm optimization algorithm, it has been described in many online papers. I have used a compiled version of the C source code from:
http://www.norg.uminho.pt/aivaz/pswarm/

12. *SCE*: Shuffled Complex Evolution, described in:
Duan, Q., S. Sorooshian, and V. Gupta, Effective and efficient global optimization for conceptual rainfall-runoff models, Water Resour. Res., 28, 1015-1031, 1992.
The version I used was graciously made available by Matthias Cuntz via a personal e-mail.

13. *SHGO*: Simplicial Homology Global Optimization, taken directly from the SciPy implementation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo

List of benchmark test suites:

1. SciPy Extended: 235 multivariate problems (where the number of independent variables ranges from 2 to 17), again with multiple local/global minima.
I have added about 40 new functions to the standard SciPy benchmarks and fixed a few bugs in the existing benchmark models in the SciPy repository.

2. GKLS: 1,500 test functions, with dimensionality varying from 2 to 6, generated with the super famous GKLS Test Functions Generator. I have taken the original C code (available at http://netlib.org/toms/) and converted it to Python.

3. GlobOpt: 288 tough problems, with dimensionality varying from 2 to 5, created with another test function generator which I arbitrarily named "GlobOpt": https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization . The original code is in C++ and I have bridged it to Python using Cython.
*Many thanks* go to Professor Marco Locatelli for providing an updated copy of the C++ source code.

4. MMTFG: sort-of an acronym for "Multi-Modal Test Function with multiple Global minima", this test suite implements the work of Jani Ronkkonen: https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima . It contains 981 test problems with dimensionality varying from 2 to 4. The original code is in C and I have bridged it to Python using Cython.

5. GOTPY: a generator of benchmark functions using the Bocharov-Feldbaum "Method-Min", containing 400 test problems with dimensionality varying from 2 to 5. I have taken the Python implementation from https://github.com/redb0/gotpy and improved it in terms of runtime.
Original paper from http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng .

6. Huygens: this benchmark suite is very different from the rest, as it uses a "fractal" approach to generate test functions. It is based on the work of Cara MacNish on Fractal Functions. The original code is in Java, and at the beginning I just converted it to Python: given it was slow as a turtle, I have re-implemented it in Fortran and wrapped it using f2py, then generating 600 2-dimensional test problems out of it.

7. LGMVG: not sure about the meaning of the acronym, but the implementation follows the "Max-Set of Gaussians Landscape Generator" described in http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is given in Matlab, but it's fairly easy to convert it to Python. This test suite contains 304 problems with dimensionality varying from 2 to 5.

8. NgLi: Stemming from the work of Chi-Kong Ng and Duan Li, this is a test problem generator for unconstrained optimization, but it's fairly easy to assign bound constraints to it. The methodology is described in https://www.sciencedirect.com/science/article/pii/S0305054814001774 , while the Matlab source code can be found in http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab script to generate 240 problems with dimensionality varying from 2 to 5 by outputting the generator parameters in text files, then used Python to create the objective functions based on those parameters and the benchmark methodology.

9. MPM2: Implementing the "Multiple Peaks Model 2", there is a Python implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is a test problem generator also used in the smoof library; I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5.

10. RandomFields: as described in https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition , it generates benchmark functions by "smoothing" one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself.

11. NIST: not exactly the realm of Global Optimization solvers, but the NIST StRD dataset can be used to generate a single objective function as "sum of squares". I have used the NIST dataset as implemented in lmfit, thus creating 27 test problems with dimensionality ranging from 2 to 9.

12. GlobalLib: Arnold Neumaier maintains a suite of test problems termed "COCONUT Benchmark", and Sahinidis has converted the GlobalLib and PrincetonLib AMPL/GAMS datasets into C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp). I have used a simple C parser to convert the benchmarks from C to Python.
The global minima are taken from Sahinidis or from Neumaier or refined using the NEOS server when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9.

13. CVMG: another "landscape generator", I had to dig it out using the Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html ; the acronym stands for "Continuous Valued Multimodality Generator". Source code is in C++ but it's fairly easy to port it to Python. In addition to the original implementation (that uses the Sigmoid as a softmax/transformation function) I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5.

14. NLSE: again, not really the realm of Global optimization solvers, but Nonlinear Systems of Equations can be transformed into single objective functions to optimize. I have drawn from many different sources (Publications, ALIAS/COPRIN and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8.

15. Schoen: based on the early work of Fabio Schoen and his short note on a simple but interesting idea for a test function generator, I have taken the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6.
*Many thanks* go to Professor Fabio Schoen for providing an updated copy of the source code and for the email communications.

16. Robust: the last benchmark test suite for this exercise, it is actually composed of 5 different kinds of analytical test function generators, containing deceptive, multimodal, flat functions depending on the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html ; I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6.

Enjoy, and Happy 2021 :-) .

Andrea.

From andyfaff at gmail.com Fri Jan 8 04:41:04 2021
From: andyfaff at gmail.com (Andrew Nelson)
Date: Fri, 8 Jan 2021 20:41:04 +1100
Subject: [SciPy-Dev] Global Optimization Benchmarks
In-Reply-To: References: Message-ID:

Dear Andrea,
that's a great resource you provide. From a quick skim over it, you mention a few bugs in the scipy codebase for the global benchmark problems. It would be great if you could provide more details (or even provide a PR?) so we can fix them. It would be even better if we could include the 40 extra functions that you added on top of that (alternatively I could go through the list and figure out which ones scipy is missing).

Note that the Momil arxiv paper has quite a few mistakes in it as well; there are several functions where I actually found a better global minimum, or it was in a different place to that specified. I found this out by making a test setup for the suite. I also vaguely remember that there were some functions where the definition was wrong, and I had to go to other papers to fix the issue.

On Fri, 8 Jan 2021 at 20:21, Andrea Gavana wrote:
> [...]

--
_____________________________________
Dr. Andrew Nelson
_____________________________________

From andrea.gavana at gmail.com Fri Jan 8 04:54:46 2021
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Fri, 8 Jan 2021 10:54:46 +0100
Subject: [SciPy-Dev] Global Optimization Benchmarks
In-Reply-To: References: Message-ID:

Hi Andrew,

On Fri, 8 Jan 2021 at 10:41, Andrew Nelson wrote:
> [...]

Yes, in fact for the SciPy Extended benchmarks I have taken the functions from the SciPy repository, with all the fixes and corrections you already made. That said, a few things I have found out are:

1. The global optimum for the Alpine02 function, with more significant digits, is:
self.fglob = -6.12950389

2. The global optimum for the Cola function, with more significant digits, is:
self.fglob = 11.74639029

3. The DevilliersGlasser02 objective function turned out to be impossible to solve because the *bounds* are wrong. It should be:
self._bounds = list(zip([0.5] * self.N, [60.0] * self.N))
instead of:
self._bounds = list(zip([1.0] * self.N, [60.0] * self.N))

4. The global optimum for the Hansen function, with more significant digits, is:
self.fglob = -176.54179313

I can't easily make a PR as I don't have much time to set up a proper environment on my computer (nor the privileges to do so...). However, I can easily find out which new functions I have added and potentially send the code to you, if you wish.

Andrea.
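P.S.: a quick way to sanity-check the refined optima above, as a sketch only (the import path below is hypothetical; in the scipy repo these classes live under benchmarks/go_benchmark_functions, so adjust it to your local layout):

    from scipy.optimize import differential_evolution
    # hypothetical import path for the benchmark suite
    from go_benchmark_functions.go_funcs_A import Alpine02

    problem = Alpine02()
    result = differential_evolution(problem.fun, problem._bounds, seed=0)
    # result.fun should approach the refined optimum, fglob = -6.12950389
    print(result.fun, problem.fglob)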
>> >> This new set of benchmarks pretty much supersedes - and greatly expands - >> the previous analysis that you can find at this location: >> http://infinity77.net/global_optimization/ . >> >> The approach I have taken this time is to select as many benchmark test >> suites as possible: most of them are characterized by test function >> *generators*, from which we can actually create almost an unlimited >> number of unique test problems. The biggest news: >> >> >> 1. This whole exercise is made up of *6,825* test problems divided >> across *16* different test suites: most of these problems are of low >> dimensionality (2 to 6 variables) with a few benchmarks extending to 9+ >> variables. With all the sensitivities performed during this exercise on >> those benchmarks, the overall grand total number of function evaluations >> stands at *3,859,786,025* - close to *4 billion*. Not bad. >> 2. A couple of "new" optimization algorithms I have ported to Python: >> >> >> - MCS: Multilevel Coordinate Search >> , it's my >> translation to Python of the original Matlab code from A. Neumaier and W. >> Huyer (thus also getting GLS and MINQ for free). I have added a few minor >> improvements compared to the original implementation. >> - BiteOpt: BITmask Evolution OPTimization >> , I have converted the C++ >> code into Python and added a few minor modifications. >> >> >> Enough chatting for now. The 13 tested algorithms are described here: >> >> http://infinity77.net/go_2021/ >> >> High level description & results of the 16 benchmarks: >> >> http://infinity77.net/go_2021/thebenchmarks.html >> >> Each benchmark test suite has its own dedicated page, with more detailed >> results and sensitivities. >> >> List of tested algorithms: >> >> 1. >> >> *AMPGO*: Adaptive Memory Programming for Global Optimization: this is >> my Python implementation of the algorithm described here: >> >> >> http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf >> >> I have added a few improvements here and there based on my Master >> Thesis work on the standard Tunnelling Algorithm of Levy, Montalvo and >> Gomez. After AMPGO was integrated in lmfit >> , I have improved it even more - >> in my opinion. >> 2. >> >> *BasinHopping*: Basin hopping is a random algorithm which attempts to >> find the global minimum of a smooth scalar function of one or more >> variables. The algorithm was originally described by David Wales: >> >> http://www-wales.ch.cam.ac.uk/ >> >> BasinHopping is now part of the standard SciPy distribution. >> 3. >> >> *BiteOpt*: BITmask Evolution OPTimization, based on the algorithm >> presented in this GitHub link: >> >> https://github.com/avaneev/biteopt >> >> I have converted the C++ code into Python and added a few minor >> modifications. >> 4. >> >> *CMA-ES*: Covariance Matrix Adaptation Evolution Strategy, based on >> the following algorithm: >> >> http://www.lri.fr/~hansen/cmaesintro.html >> >> http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for >> the algorithm) >> 5. >> >> *CRS2*: Controlled Random Search with Local Mutation, as implemented >> in the NLOpt package: >> >> >> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation >> 6. 
>> >> *DE*: Differential Evolution, described in the following page: >> >> http://www1.icsi.berkeley.edu/~storn/code.html >> >> DE is now part of the standard SciPy distribution, and I have taken >> the implementation as it stands in SciPy: >> >> >> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution >> 7. >> >> *DIRECT*: the DIviding RECTangles procedure, described in: >> >> >> https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf >> >> >> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L (Python >> code for the algorithm) >> 8. >> >> *DualAnnealing*: the Dual Annealing algorithm, taken directly from >> the SciPy implementation: >> >> >> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing >> 9. >> >> *LeapFrog*: the Leap Frog procedure, which was recommended >> to me, taken from: >> >> https://github.com/flythereddflagg/lpfgopt >> 10. >> >> *MCS*: Multilevel Coordinate Search, it's my translation to Python of >> the original Matlab code from A. Neumaier and W. Huyer (thus also getting >> GLS and >> MINQ for free): >> >> https://www.mat.univie.ac.at/~neum/software/mcs/ >> >> I have added a few minor improvements compared to the original >> implementation. See the MCS >> section for a quick and >> dirty comparison between the Matlab code and my Python conversion. >> 11. >> >> *PSWARM*: Particle Swarm optimization algorithm, it has been >> described in many online papers. I have used a compiled version of the C >> source code from: >> >> http://www.norg.uminho.pt/aivaz/pswarm/ >> 12. >> >> *SCE*: Shuffled Complex Evolution, described in: >> >> Duan, Q., S. Sorooshian, and V. Gupta, Effective and efficient global >> optimization for conceptual rainfall-runoff models, Water Resour. Res., 28, >> 1015-1031, 1992. >> >> The version I used was graciously made available by Matthias Cuntz >> via a personal e-mail. >> 13. >> >> *SHGO*: Simplicial Homology Global Optimization, taken directly from >> the SciPy implementation: >> >> >> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo >> >> >> List of benchmark test suites: >> >> 1. >> >> SciPy Extended >> : >> 235 multivariate problems (where the number of independent variables ranges >> from 2 to 17), again with multiple local/global minima. >> >> I have added about 40 new functions to the standard SciPy benchmarks >> and >> fixed a few bugs in the existing benchmark models in the SciPy repository. >> 2. >> >> GKLS : 1,500 test >> functions, with dimensionality varying from 2 to 6, generated with the >> super famous GKLS Test Functions Generator >> . I have taken the original C code >> (available at http://netlib.org/toms/) and converted it to Python. >> 3. >> >> GlobOpt : 288 >> tough problems, with dimensionality varying from 2 to 5, created with >> another test function generator which I arbitrarily named "GlobOpt": >> https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization . >> The original code is in C++ and I have bridged it to Python using Cython. >> >> *Many thanks* go to Professor Marco Locatelli for providing an >> updated copy of the C++ source code. >> 4. 
>> >> MMTFG : sort-of an >> acronym for "Multi-Modal Test Function with multiple Global minima", this >> test suite implements the work of Jani Ronkkonen: >> https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima . >> It contains 981 test problems with dimensionality varying from 2 to 4. The >> original code is in C and I have bridged it to Python using Cython. >> 5. >> >> GOTPY : a generator >> of benchmark functions using the Bocharov-Feldbaum "Method-Min", containing >> 400 test problems with dimensionality varying from 2 to 5. I have taken the >> Python implementation from https://github.com/redb0/gotpy and >> improved it in terms of runtime. >> >> Original paper from >> http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng >> . >> 6. >> >> Huygens : this >> benchmark suite is very different from the rest, as it uses a "fractal" >> approach to generate test functions. It is based on the work of Cara >> MacNish on Fractal Functions >> . >> The original code is in Java, and at the beginning I just converted it to >> Python: given it was slow as a turtle, I have re-implemented it in Fortran >> and wrapped it using f2py , then >> generating 600 2-dimensional test problems out of it. >> 7. >> >> LGMVG : not sure >> about the meaning of the acronym, but the implementation follows the >> "Max-Set of Gaussians Landscape Generator" described in >> http://boyuan.global-optimization.com/LGMVG/index.htm . Source code >> is given in Matlab, but it's fairly easy to convert it to Python. This test >> suite contains 304 problems with dimensionality varying from 2 to 5. >> 8. >> >> NgLi : Stemming from >> the work of Chi-Kong Ng and Duan Li, this is a test problem generator for >> unconstrained optimization, but it's fairly easy to assign bound >> constraints to it. The methodology is described in >> https://www.sciencedirect.com/science/article/pii/S0305054814001774 , >> while the Matlab source code can be found in >> http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab >> script to generate 240 problems with dimensionality varying from 2 to 5 by >> outputting the generator parameters in text files, then used Python to >> create the objective functions based on those parameters and the benchmark >> methodology. >> 9. >> >> MPM2 : Implementing the >> "Multiple Peaks Model 2", there is a Python implementation at >> https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This >> is a test problem generator also used in the smoof >> library, I have taken the code >> almost as is and generated 480 benchmark functions with dimensionality >> varying from 2 to 5. >> 10. >> >> RandomFields >> : as >> described in >> https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition , >> it generates benchmark functions by "smoothing" one or more >> multidimensional discrete random fields and composing them. No source code >> is given, but the implementation is fairly straightforward from the article >> itself. >> 11. >> >> NIST : not exactly the >> realm of Global Optimization solvers, but the NIST StRD >> dataset can >> be used to generate a single objective function as "sum of squares". I have >> used the NIST dataset as implemented in lmfit >> , thus >> creating 27 test problems with dimensionality ranging from 2 to 9. >> 12. >> >> GlobalLib : >> Arnold Neumaier maintains >> a >> suite of test problems termed "COCONUT Benchmark" 
and Sahinidis has >> converted the GlobalLib and PrincetonLib AMPL/GAMS datasets into C/Fortran >> code (http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a >> simple C parser to convert the benchmarks from C to Python. >> >> The global minima are taken from Sahinidis >> or from >> Neumaier or refined using the NEOS server >> when the accuracy of the reported >> minima is too low. The suite contains 181 test functions with >> dimensionality varying between 2 and 9. >> 13. >> >> CVMG : another >> "landscape generator", I had to dig it out using the Wayback Machine at >> http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html , >> the acronym stands for "Continuous Valued Multimodality Generator". Source >> code is in C++ but it's fairly easy to port it to Python. In addition to >> the original implementation (that uses the Sigmoid >> as a >> softmax/transformation function) I have added a few others to create varied >> landscapes. 360 test problems have been generated, with dimensionality >> ranging from 2 to 5. >> 14. >> >> NLSE : again, not >> really the realm of Global optimization solvers, but Nonlinear Systems of >> Equations can be transformed into single objective functions to optimize. I >> have drawn from many different sources (Publications >> >> , ALIAS/COPRIN >> and >> many others) to create 44 systems of nonlinear equations with >> dimensionality ranging from 2 to 8. >> 15. >> >> Schoen : based on >> the early work of Fabio Schoen and his short note >> on a simple >> but interesting idea on a test function generator, I have taken the C code >> in the note and converted it into Python, thus creating 285 benchmark >> functions with dimensionality ranging from 2 to 6. >> >> *Many thanks* go to Professor Fabio Schoen for providing an updated >> copy of the source code and for the email communications. >> 16. >> >> Robust : the last >> benchmark test suite for this exercise, it is actually composed of 5 >> different kind-of-analytical test function generators, containing >> deceptive, multimodal, flat functions depending on the settings. Matlab >> source code is available at http://www.alimirjalili.com/RO.html , I >> simply converted it to Python and created 420 benchmark functions with >> dimensionality ranging from 2 to 6. >> >> >> Enjoy, and Happy 2021 :-) . >> >> >> Andrea. >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jan 8 05:06:33 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 8 Jan 2021 11:06:33 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana wrote: > Dear SciPy Developers & Users, > > long time no see :-) . I thought to start 2021 with a bit of a bang, > to try and forget how bad 2020 has been... So I am happy to present you > with a revamped version of the Global Optimization Benchmarks from my > previous exercise in 2013. > Hi Andrea, this is awesome! Thanks for sharing! 
This could be really useful to link to and use as a guide for providing recommendations for solvers to use in the scipy.optimize tutorials. It's good to see that SciPy overall is much more competitive than it was in 2013. Overall it seems SHGO is our most accurate solver, and making it faster seems worthwhile. That shouldn't be very difficult, given that it's all pure Python still. MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and seem the best candidates to be considered for inclusion in SciPy. If you have recommendations or takeaways from all this work for SciPy, I'd love to hear them. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Fri Jan 8 05:35:10 2021 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Fri, 8 Jan 2021 11:35:10 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Hi Ralf, On Fri, 8 Jan 2021 at 11:07, Ralf Gommers wrote: > > > On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana > wrote: > >> Dear SciPy Developers & Users, >> >> long time no see :-) . I thought to start 2021 with a bit of a bang, >> to try and forget how bad 2020 has been... So I am happy to present you >> with a revamped version of the Global Optimization Benchmarks from my >> previous exercise in 2013. >> > > Hi Andrea, this is awesome! Thanks for sharing! > I am happy you like it :-) . > This could be really useful to link to and use as a guide for providing > recommendations for solvers to use in the scipy.optimize tutorials. It's > good to see that SciPy overall is much more competitive than it was in > 2013. Overall it seems SHGO is our most accurate solver, and making it > faster seems worthwhile. That shouldn't be very difficult, given that it's > all pure Python still. > I have to say that, compared to back in 2013, the addition of SHGO and DualAnnealing to SciPy has made the global optimization world in SciPy much more powerful, pretty much at the top of what can currently be done with open source solvers. > MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and > seem the best candidates to be considered for inclusion in SciPy. > I couldn't find a license restriction for MCS, but maybe I haven't looked hard enough... Do you have a link for it? I am just curious. > If you have recommendations or takeaways from all this work for SciPy, I'd > love to hear them. > I have a few thoughts, but please bear in mind that it's my opinion and only based on this exercise plus a couple of real-life problems I have been working on recently: 1. Assuming we are dealing with a low dimensional problem, SHGO is close to unbeatable. I have found a glitch, though: when the number of variables gets to 10+, or for repeated continuous optimizations, SHGO seems to require enormous amounts of memory. In the SciPy Extended benchmark, trying all the 100 restarts got my RAM consumption to 190 GB for SHGO only - not something you want to try unless you have a monster machine like mine. 2. DualAnnealing is also extremely powerful - ranking consistently close to the top for most of the benchmarks. It probably requires some tuning of the parameters (which I haven't done), especially when the allowable number of function evaluations is large. That said, you can clearly see DualAnnealing shining in the SciPy Extended, GKLS, LGMVG and RandomFields benchmarks. 3. BasinHopping and DifferentialEvolution are generally slightly weaker, at least on these benchmarks. 
That said, I have used both of them with great success on real life problems - albeit with generally generous budgets of function evaluations. 4. Real-life-wise, I recently had three very tough problems to work on: one 9-dimensional objective function describing multi-phase decline curves for oil/gas wells, which I am now satisfactorily fitting with DualAnnealing. Another one on optimization of 3D well trajectories, which I am happily handing over to SHGO or MCS depending on the problem. And another one related to wind and renewable data fitting which DifferentialEvolution is handling quite well. So, all in all, benchmarks only give you so much information: real-life problems sometimes defy the accepted wisdom that builds up around contrived (and synthetic) objective functions. That said, if I had to attack a new problem and I had no idea where to start, I would definitely give SHGO and DualAnnealing the first go, as they are quite powerful across a large spectrum of problems. In the end, I believe that SciPy will definitely benefit from the addition of a couple (few?) more robust global solvers, especially if they implement techniques that are completely different from the existing ones (such as DIRECT, MCS, BiteOpt, of course). Giving more options to users is always going to make people happy - but of course you have to balance it with the maintenance efforts in the library. Andrea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jan 8 06:14:59 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 8 Jan 2021 12:14:59 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana wrote: > Hi Ralf, > > On Fri, 8 Jan 2021 at 11:07, Ralf Gommers wrote: > >> >> >> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana >> wrote: >> >>> Dear SciPy Developers & Users, >>> >>> long time no see :-) . I thought to start 2021 with a bit of a bang, >>> to try and forget how bad 2020 has been... So I am happy to present you >>> with a revamped version of the Global Optimization Benchmarks from my >>> previous exercise in 2013. >>> >> >> Hi Andrea, this is awesome! Thanks for sharing! >> > > I am happy you like it :-) . > > > >> This could be really useful to link to and use as a guide for providing >> recommendations for solvers to use in the scipy.optimize tutorials. It's >> good to see that SciPy overall is much more competitive than it was in >> 2013. Overall it seems SHGO is our most accurate solver, and making it >> faster seems worthwhile. That shouldn't be very difficult, given that it's >> all pure Python still. >> > > I have to say that, compared to back in 2013, the addition of SHGO and > DualAnnealing to SciPy has made the global optimization world in SciPy much > more powerful, pretty much at the top of what can currently be done with > open source solvers. > > >> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and >> seem the best candidates to be considered for inclusion in SciPy. >> > > I couldn't find a license restriction for MCS, but maybe I haven't looked > hard enough... Do you have a link for it? I am just curious. > MCS itself doesn't contain any license information, but it depends on MINQ which has a link in "All versions of MINQ are licensed" on this page: https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for non-commercial use. 
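As an aside for anyone reading along who wants to follow Andrea's "give SHGO and DualAnnealing the first go" advice: both solvers are a couple of lines away in scipy.optimize. A minimal sketch (the shifted Rastrigin toy objective and the bounds below are made up for illustration, they are not taken from Andrea's suites):

import numpy as np
from scipy.optimize import shgo, dual_annealing

def shifted_rastrigin(x):
    # Made-up toy objective: global minimum of 0.0 at (1, 1),
    # surrounded by plenty of local minima.
    z = np.asarray(x) - 1.0
    return 10.0 * z.size + np.sum(z * z - 10.0 * np.cos(2.0 * np.pi * z))

bounds = [(-5.12, 5.12)] * 2

res_shgo = shgo(shifted_rastrigin, bounds)
res_da = dual_annealing(shifted_rastrigin, bounds, seed=1234)

print(res_shgo.x, res_shgo.fun)  # both runs should land close to (1, 1) and 0.0
print(res_da.x, res_da.fun)

Both functions return a standard OptimizeResult, so swapping one global solver for another is mostly a one-line change.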
> >> If you have recommendations or takeaways from all this work for SciPy, >> I'd love to hear them. >> > > I have a few thoughts, but please bear in mind that it's my opinion and > only based on this exercise plus a couple of real-life problems I have been > working on recently: > Thanks! > 1. Assuming we are dealing with a low dimensional problem, SHGO is close > to unbeatable. I have found a glitch when the number of variables gets 10+, > or for repeated continuous optimizations, SHGO seems to require enormous > amounts of memory: in the SciPy Extended benchmark, trying all the 100 > restarts got my RAM consumption to 190 GB for SHGO only - not something you > want to ty unless you have a monster machine like mine. > That seems like something we should improve. Cheers, Ralf > 2. DualAnnealing is also extremely powerful - ranking consistently close > to the top for most of the benchmarks. It probably requires some tuning of > the parameters (which I haven't done), especially when the allowable number > of functions evaluations is large. That said, you can clearly see > DualAnnealing shining in the SciPy Extended, GKLS, LGMVG and RandomFields > benchmarks. > > 3. BasinHopping and DifferentialEvolution are generally slightly weaker, > at least on these benchmarks. That said, I have used both of them with > great success on real life problems - albeit with generally generous > budgets of functions evaluations. > > 4. Real-life-wise, I recently had three very tough problems to work on: > one 9-dimensional objective function describing multi-phase decline curves > for oil/gas wells, which I am now satisfactorily fitting with > DualAnnealing. Another one on optimization of 3D well trajectories, which I > am happily handing over to SHGO or MCS depending on the problem. And > another one related to wind and renewable data fitting which > DifferentialEvolution is handling quite well. > > So, all in all, benchmarks only give you so much information: real-life > problems sometimes defy the accepted wisdom that occurs because of > contrived (but synthetic) objective functions. That said, if I had to > attack a new problem and I had no idea where to start, I would definitely > give SHGO and DualAnnealing the first go, as they are quite powerful across > a large spectrum of problems. > > In the end, I believe that SciPy will definitely benefit from the addition > of a couple (few?) more robust global solvers, especially if they > implement techniques that are completely different from the existing ones > (such as DIRECT, MCS, BiteOpt, of course). Giving more options to users is > always going to make people happy - but of course you have to balance it > with the maintenance efforts in the library. > > Andrea. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Fri Jan 8 06:20:08 2021 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Fri, 8 Jan 2021 12:20:08 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Hi Ralf, On Fri, 8 Jan 2021 at 12:15, Ralf Gommers wrote: > > > On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana > wrote: > >> Hi Ralf, >> >> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers wrote: >> >>> >>> >>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana >>> wrote: >>> >>>> Dear SciPy Developers & Users, >>>> >>>> long time no see :-) . I thought to start 2021 with a bit of a >>>> bang, to try and forget how bad 2020 has been... 
So I am happy to present >>>> you with a revamped version of the Global Optimization Benchmarks from my >>>> previous exercise in 2013. >>>> >>> >>> Hi Andrea, this is awesome! Thanks for sharing! >>> >> >> I am happy you like it :-) . >> >> >> >>> This could be really useful to link to and use as a guide for providing >>> recommendations for solvers to use in the scipy.optimize tutorials. It's >>> good to see that SciPy overall is much more competitive than it was in >>> 2013. Overall it seems SHGO is our most accurate solver, and making it >>> faster seems worthwhile. That shouldn't be very difficult, given that it's >>> all pure Python still. >>> >> >> I have to say that, compared to back in 2013, the addition of SHGO and >> DualAnnealing to SciPy has made the global optimization world in SciPy much >> more powerful, pretty much at the top of what can currently be done with >> open source solvers. >> >> >>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and >>> seem the best candidates to be considered for inclusion in SciPy. >>> >> >> I couldn't find a license restriction for MCS, but maybe I haven't looked >> hard enough... Do you have a link for it? I am just curious. >> > > MCS itself doesn't contain any license information, but it depends on MINQ > which has a link in "All versions of MINQ are licensed" on this page: > https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for > non-commercial use. > Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy had another, different "bound constrained indefinite quadratic programming" module then we could easily swap it :-) . Andrea. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jan 8 06:27:44 2021 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 8 Jan 2021 12:27:44 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana wrote: > Hi Ralf, > > On Fri, 8 Jan 2021 at 12:15, Ralf Gommers wrote: > >> >> >> On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana >> wrote: >> >>> Hi Ralf, >>> >>> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana >>>> wrote: >>>> >>>>> Dear SciPy Developers & Users, >>>>> >>>>> long time no see :-) . I thought to start 2021 with a bit of a >>>>> bang, to try and forget how bad 2020 has been... So I am happy to present >>>>> you with a revamped version of the Global Optimization Benchmarks from my >>>>> previous exercise in 2013. >>>>> >>>> >>>> Hi Andrea, this is awesome! Thanks for sharing! >>>> >>> >>> I am happy you like it :-) . >>> >>> >>> >>>> This could be really useful to link to and use as a guide for providing >>>> recommendations for solvers to use in the scipy.optimize tutorials. It's >>>> good to see that SciPy overall is much more competitive than it was in >>>> 2013. Overall it seems SHGO is our most accurate solver, and making it >>>> faster seems worthwhile. That shouldn't be very difficult, given that it's >>>> all pure Python still. >>>> >>> >>> I have to say that, compared to back in 2013, the addition of SHGO and >>> DualAnnealing to SciPy has made the global optimization world in SciPy much >>> more powerful, pretty much at the top of what can currently be done with >>> open source solvers. 
>>> >>> >>>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed and >>>> seem the best candidates to be considered for inclusion in SciPy. >>>> >>> >>> I couldn't find a license restriction for MCS, but maybe I haven't >>> looked hard enough... Do you have a link for it? I am just curious. >>> >> >> MCS itself doesn't contain any license information, but it depends on >> MINQ which has a link in "All versions of MINQ are licensed" on this page: >> https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for >> non-commercial use. >> > > > Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy > had another, different "bound constrained indefinite quadratic programming" > module then we could easily swap it :-) . > MCS and MINQ are from the same author, so I'd expect the same restriction to apply to MCS though. We could ask for permission to license all that under BSD/MIT, sometimes that works - the author seems like the typical academic who doesn't understand open source licensing. In the past we've had success with explaining; given how much extra exposure/users MCS gets if it would be included in SciPy, it may be worth doing if someone is motivated to work on integrating MCS into SciPy. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaistriega at gmail.com Fri Jan 8 20:56:39 2021 From: kaistriega at gmail.com (Kai Striega) Date: Sat, 9 Jan 2021 09:56:39 +0800 Subject: [SciPy-Dev] ENH - New stat distribution | Generalized Hyperbolic In-Reply-To: References: Message-ID: +1 from me too, sounds like a welcome addition On Thu, 7 Jan 2021 at 10:26, Warren Weckesser wrote: > On 12/30/20, Ralf Gommers wrote: > > On Wed, Dec 30, 2020 at 9:11 AM Gabriele Bonomi > > wrote: > > > >> Hello guys, > >> > >> I would like to socialize the fact that some work being currently done > >> to include the Generalized > >> Hyperbolic Distribution > >> to > >> scipy. > >> > >> In a nutshell, this is a distribution that generalize a few other > >> distributions already in scipy (e.g. t, normal inverse gaussian, laplace > >> - > >> among others) > >> > >> I do not think this is a duplicate, but please shoot if you have any > >> concerns/suggestions wrt the above. > >> > > > > Thanks Gabriele, sounds like a good idea to add that distribution. > > > > > Agreed, I think this would be a good addition to SciPy. > > Warren > > > > > Cheers, > > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielschmitzsiegen at googlemail.com Sat Jan 9 02:37:13 2021 From: danielschmitzsiegen at googlemail.com (Daniel Schmitz) Date: Sat, 9 Jan 2021 08:37:13 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Awesome work, Andrea! Would it be possible for you to make your implementations of mcs and biteopt publicly available? And more out of curiosity, not directly related to scipy: since you work in an industry setting, did you compare these open source optimizers against commercial ones like knitro? 
Cheers, Daniel On Fri, 8 Jan 2021 at 12:28, Ralf Gommers wrote: > > > On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana > wrote: > >> Hi Ralf, >> >> On Fri, 8 Jan 2021 at 12:15, Ralf Gommers wrote: >> >>> >>> >>> On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana >>> wrote: >>> >>>> Hi Ralf, >>>> >>>> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers >>>> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana >>>>> wrote: >>>>> >>>>>> Dear SciPy Developers & Users, >>>>>> >>>>>> long time no see :-) . I thought to start 2021 with a bit of a >>>>>> bang, to try and forget how bad 2020 has been... So I am happy to present >>>>>> you with a revamped version of the Global Optimization Benchmarks from my >>>>>> previous exercise in 2013. >>>>>> >>>>> >>>>> Hi Andrea, this is awesome! Thanks for sharing! >>>>> >>>> >>>> I am happy you like it :-) . >>>> >>>> >>>> >>>>> This could be really useful to link to and use as a guide for >>>>> providing recommendations for solvers to use in the scipy.optimize >>>>> tutorials. It's good to see that SciPy overall is much more competitive >>>>> than it was in 2013. Overall it seems SHGO is our most accurate solver, and >>>>> making it faster seems worthwhile. That shouldn't be very difficult, given >>>>> that it's all pure Python still. >>>>> >>>> >>>> I have to say that, compared to back in 2013, the addition of SHGO and >>>> DualAnnealing to SciPy has made the global optimization world in SciPy much >>>> more powerful, pretty much at the top of what can currently be done with >>>> open source solvers. >>>> >>>> >>>>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed >>>>> and seem the best candidates to be considered for inclusion in SciPy. >>>>> >>>> >>>> I couldn't find a license restriction for MCS, but maybe I haven't >>>> looked hard enough... Do you have a link for it? I am just curious. >>>> >>> >>> MCS itself doesn't contain any license information, but it depends on >>> MINQ which has a link in "All versions of MINQ are licensed" on this page: >>> https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for >>> non-commercial use. >>> >> >> >> Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy >> had another, different "bound constrained indefinite quadratic programming" >> module then we could easily swap it :-) . >> > > MCS and MINQ are from the same author, so I'd expect the same restriction > to apply to MCS though. We could ask for permission to license all that > under BSD/MIT, sometimes that works - the author seems like the typical > academic who doesn't understand open source licensing. In the past we've > had success with explaining; given how much extra exposure/users MCS gets > if it would be included in SciPy, it may be worth doing if someone is > motivated to work on integrating MCS into SciPy. > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Sat Jan 9 03:12:54 2021 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 9 Jan 2021 09:12:54 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Hi Daniel, On Sat, 9 Jan 2021 at 08.37, Daniel Schmitz < danielschmitzsiegen at googlemail.com> wrote: > Awesome work, Andrea! > Thank you, I?m happy you liked it :-) . 
I have been told to try and condense the work in a paper and submit it to the Journal of Global Optimization, although I'm currently doubting that such a prestigious publication will accept a publication from a nobody like me. > Would it be possible for you to make your implementations of mcs and > biteopt publicly available? > For BiteOpt I might be able to - after a bit of polishing, nothing major. For both I'll have to ask for permission from my employer, as it was a little tour de force getting MCS right. And more out of curiosity, not directly related to scipy: since you work in > an industry setting, did you compare these open source optimizers against > commercial ones like knitro? > Unfortunately we don't have any commercial solver available - as far as I know - but I would love to redo the work using other solvers (open source or not). The very interesting exercise from Sahinidis et al: https://link.springer.com/content/pdf/10.1007/s10898-012-9951-y.pdf Although limited to a single set of benchmarks, it shows that commercial solvers like Tomlab are very efficient, tightly followed by MCS. It would be nice to try and apply the solvers in KNITRO, Tomlab, GlobSol (others?) to all the 16 benchmarks to see where open source stands, but I have no idea how it could be done as I have no licenses for those. Andrea. > Cheers, > > Daniel > > On Fri, 8 Jan 2021 at 12:28, Ralf Gommers wrote: > >> >> >> On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana >> wrote: >> >>> Hi Ralf, >>> >>> On Fri, 8 Jan 2021 at 12:15, Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana >>>> wrote: >>>> >>>>> Hi Ralf, >>>>> >>>>> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers >>>>> wrote: >>>>>> >>>>>> >>>>>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana < >>>>>> andrea.gavana at gmail.com> wrote: >>>>>> >>>>>>> Dear SciPy Developers & Users, >>>>>>> >>>>>>> long time no see :-) . I thought to start 2021 with a bit of a >>>>>>> bang, to try and forget how bad 2020 has been... So I am happy to present >>>>>>> you with a revamped version of the Global Optimization Benchmarks from my >>>>>>> previous exercise in 2013. >>>>>>> >>>>>> >>>>>> Hi Andrea, this is awesome! Thanks for sharing! >>>>>> >>>>> >>>>> I am happy you like it :-) . >>>>> >>>>> >>>>> >>>>>> This could be really useful to link to and use as a guide for >>>>>> providing recommendations for solvers to use in the scipy.optimize >>>>>> tutorials. It's good to see that SciPy overall is much more competitive >>>>>> than it was in 2013. Overall it seems SHGO is our most accurate solver, and >>>>>> making it faster seems worthwhile. That shouldn't be very difficult, given >>>>>> that it's all pure Python still. >>>>>> >>>>> >>>>> I have to say that, compared to back in 2013, the addition of SHGO and >>>>> DualAnnealing to SciPy has made the global optimization world in SciPy much >>>>> more powerful, pretty much at the top of what can currently be done with >>>>> open source solvers. >>>>> >>>>> >>>>>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed >>>>>> and seem the best candidates to be considered for inclusion in SciPy. >>>>>> >>>>> >>>>> I couldn't find a license restriction for MCS, but maybe I haven't >>>>> looked hard enough... Do you have a link for it? I am just curious. >>>>> >>>> >>>> MCS itself doesn't contain any license information, but it depends on >>>> MINQ which has a link in "All versions of MINQ are licensed" on this page: >>>> https://www.mat.univie.ac.at/~neum/software/minq/. 
It's only free for >>>> non-commercial use. >>>> >>> >>> >>> Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy >>> had another, different "bound constrained indefinite quadratic programming" >>> module then we could easily swap it :-) . >>> >> >> MCS and MINQ are from the same author, so I'd expect the same restriction >> to apply to MCS though. We could ask for permission to license all that >> under BSD/MIT, sometimes that works - the author seems like the typical >> academic who doesn't understand open source licensing. In the past we've >> had success with explaining; given how much extra exposure/users MCS gets >> if it would be included in SciPy, it may be worth doing if someone is >> motivated to work on integrating MCS into SciPy. >> >> Cheers, >> Ralf >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Sat Jan 9 03:30:34 2021 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 9 Jan 2021 09:30:34 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: On Sat, 9 Jan 2021 at 09.12, Andrea Gavana wrote: > Hi Daniel, > > On Sat, 9 Jan 2021 at 08.37, Daniel Schmitz < > danielschmitzsiegen at googlemail.com> wrote: > >> Awesome work, Andrea! >> > > Thank you, I?m happy you liked it :-) . I have been told to try and > condense the work in a paper and submit it to the Journal of Global > Optimization, although I?m currently doubting that such a prestigious > publication will accept a publication from a nobody like me. > > >> Would it be possible for you to make your implementations of mcs and >> biteopt publicly available? >> > > For BiteOpt I might be able to - after a bit of polishing, nothing major. > For both I?ll have to ask for permission from my employer, as it was a > little tour the force getting MCS right. > That said, there is a publicly available Python wrapper for BiteOpt here: https://github.com/leonidk/biteopt I haven?t used it myself, I decided to rewrite the algorithm in Python as I wanted to understand what the code was doing and I wanted to add a couple of (minor) modifications in order to try and make the solver more efficient. Andrea. > > And more out of curiosity, not directly related to scipy: since you work >> in an industry setting, did you compare these open source optimizers >> against commercial ones like knitro? >> > > Unfortunately we don?t have any commercial solver available - as far as I > know - but I would love to redo the work using other solvers (open source > or not). The very interesting exercise from Sahinidis et al: > > https://link.springer.com/content/pdf/10.1007/s10898-012-9951-y.pdf > > Although limited to a single set of benchmarks, show that commercial > solvers like Tomlab are very efficient, tightly followed by MCS. It would > be nice to try and apply the solvers in KNITRO, Tomlab, GlobSol (others?) > to all the 16 benchmarks to see where open source stands, but I have no > idea how it could be done as I have no licenses for those. > > Andrea. 
> > > >> Cheers, >> >> Daniel >> >> On Fri, 8 Jan 2021 at 12:28, Ralf Gommers wrote: >> >>> >>> >>> On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana >>> wrote: >>> >>>> Hi Ralf, >>>> >>>> On Fri, 8 Jan 2021 at 12:15, Ralf Gommers >>>> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana >>>>> wrote: >>>>> >>>>>> Hi Ralf, >>>>>> >>>>>> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana < >>>>>>> andrea.gavana at gmail.com> wrote: >>>>>>> >>>>>>>> Dear SciPy Developers & Users, >>>>>>>> >>>>>>>> long time no see :-) . I thought to start 2021 with a bit of a >>>>>>>> bang, to try and forget how bad 2020 has been... So I am happy to present >>>>>>>> you with a revamped version of the Global Optimization Benchmarks from my >>>>>>>> previous exercise in 2013. >>>>>>>> >>>>>>> >>>>>>> Hi Andrea, this is awesome! Thanks for sharing! >>>>>>> >>>>>> >>>>>> I am happy you like it :-) . >>>>>> >>>>>> >>>>>> >>>>>>> This could be really useful to link to and use as a guide for >>>>>>> providing recommendations for solvers to use in the scipy.optimize >>>>>>> tutorials. It's good to see that SciPy overall is much more competitive >>>>>>> than it was in 2013. Overall it seems SHGO is our most accurate solver, and >>>>>>> making it faster seems worthwhile. That shouldn't be very difficult, given >>>>>>> that it's all pure Python still. >>>>>>> >>>>>> >>>>>> I have to say that, compared to back in 2013, the addition of SHGO >>>>>> and DualAnnealing to SciPy has made the global optimization world in SciPy >>>>>> much more powerful, pretty much at the top of what can currently be done >>>>>> with open source solvers. >>>>>> >>>>>> >>>>>>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed >>>>>>> and seem the best candidates to be considered for inclusion in SciPy. >>>>>>> >>>>>> >>>>>> I couldn't find a license restriction for MCS, but maybe I haven't >>>>>> looked hard enough... Do you have a link for it? I am just curious. >>>>>> >>>>> >>>>> MCS itself doesn't contain any license information, but it depends on >>>>> MINQ which has a link in "All versions of MINQ are licensed" on this page: >>>>> https://www.mat.univie.ac.at/~neum/software/minq/. It's only free for >>>>> non-commercial use. >>>>> >>>> >>>> >>>> Ah, OK, thank you, I didn't think about that. Of course, assuming SciPy >>>> had another, different "bound constrained indefinite quadratic programming" >>>> module then we could easily swap it :-) . >>>> >>> >>> MCS and MINQ are from the same author, so I'd expect the same >>> restriction to apply to MCS though. We could ask for permission to license >>> all that under BSD/MIT, sometimes that works - the author seems like the >>> typical academic who doesn't understand open source licensing. In the past >>> we've had success with explaining; given how much extra exposure/users MCS >>> gets if it would be included in SciPy, it may be worth doing if someone is >>> motivated to work on integrating MCS into SciPy. >>> >>> Cheers, >>> Ralf >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danielschmitzsiegen at googlemail.com Sat Jan 9 04:11:09 2021 From: danielschmitzsiegen at googlemail.com (Daniel Schmitz) Date: Sat, 9 Jan 2021 10:11:09 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: Thanks Andrea, will give biteopt a try. At first glance though, despite the "sparse" documentation it should be possible to work after checking the examples. Regarding a publication, I would not be so pessimistic: the sheer amount of benchmarks you carried out is very interesting for the optimization community! The paper you attached is very interesting, Tomlab seems to be superior for hard problems in general. Best, Daniel On Sat, 9 Jan 2021 at 09:31, Andrea Gavana wrote: > > On Sat, 9 Jan 2021 at 09.12, Andrea Gavana > wrote: > >> Hi Daniel, >> >> On Sat, 9 Jan 2021 at 08.37, Daniel Schmitz < >> danielschmitzsiegen at googlemail.com> wrote: >> >>> Awesome work, Andrea! >>> >> >> Thank you, I?m happy you liked it :-) . I have been told to try and >> condense the work in a paper and submit it to the Journal of Global >> Optimization, although I?m currently doubting that such a prestigious >> publication will accept a publication from a nobody like me. >> >> >>> Would it be possible for you to make your implementations of mcs and >>> biteopt publicly available? >>> >> >> For BiteOpt I might be able to - after a bit of polishing, nothing major. >> For both I?ll have to ask for permission from my employer, as it was a >> little tour the force getting MCS right. >> > > That said, there is a publicly available Python wrapper for BiteOpt here: > > https://github.com/leonidk/biteopt > > I haven?t used it myself, I decided to rewrite the algorithm in Python as > I wanted to understand what the code was doing and I wanted to add a couple > of (minor) modifications in order to try and make the solver more efficient. > > Andrea. > > > > >> >> And more out of curiosity, not directly related to scipy: since you work >>> in an industry setting, did you compare these open source optimizers >>> against commercial ones like knitro? >>> >> >> Unfortunately we don?t have any commercial solver available - as far as I >> know - but I would love to redo the work using other solvers (open source >> or not). The very interesting exercise from Sahinidis et al: >> >> https://link.springer.com/content/pdf/10.1007/s10898-012-9951-y.pdf >> >> Although limited to a single set of benchmarks, show that commercial >> solvers like Tomlab are very efficient, tightly followed by MCS. It would >> be nice to try and apply the solvers in KNITRO, Tomlab, GlobSol (others?) >> to all the 16 benchmarks to see where open source stands, but I have no >> idea how it could be done as I have no licenses for those. >> >> Andrea. >> >> >> >>> Cheers, >>> >>> Daniel >>> >>> On Fri, 8 Jan 2021 at 12:28, Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Fri, Jan 8, 2021 at 12:20 PM Andrea Gavana >>>> wrote: >>>> >>>>> Hi Ralf, >>>>> >>>>> On Fri, 8 Jan 2021 at 12:15, Ralf Gommers >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Fri, Jan 8, 2021 at 11:35 AM Andrea Gavana < >>>>>> andrea.gavana at gmail.com> wrote: >>>>>> >>>>>>> Hi Ralf, >>>>>>> >>>>>>> On Fri, 8 Jan 2021 at 11:07, Ralf Gommers >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana < >>>>>>>> andrea.gavana at gmail.com> wrote: >>>>>>>> >>>>>>>>> Dear SciPy Developers & Users, >>>>>>>>> >>>>>>>>> long time no see :-) . 
I thought to start 2021 with a bit of a >>>>>>>>> bang, to try and forget how bad 2020 has been... So I am happy to present >>>>>>>>> you with a revamped version of the Global Optimization Benchmarks from my >>>>>>>>> previous exercise in 2013. >>>>>>>>> >>>>>>>> >>>>>>>> Hi Andrea, this is awesome! Thanks for sharing! >>>>>>>> >>>>>>> >>>>>>> I am happy you like it :-) . >>>>>>> >>>>>>> >>>>>>> >>>>>>>> This could be really useful to link to and use as a guide for >>>>>>>> providing recommendations for solvers to use in the scipy.optimize >>>>>>>> tutorials. It's good to see that SciPy overall is much more competitive >>>>>>>> than it was in 2013. Overall it seems SHGO is our most accurate solver, and >>>>>>>> making it faster seems worthwhile. That shouldn't be very difficult, given >>>>>>>> that it's all pure Python still. >>>>>>>> >>>>>>> >>>>>>> I have to say that, compared to back in 2013, the addition of SHGO >>>>>>> and DualAnnealing to SciPy has made the global optimization world in SciPy >>>>>>> much more powerful, pretty much at the top of what can currently be done >>>>>>> with open source solvers. >>>>>>> >>>>>>> >>>>>>>> MCS isn't open source, but both DIRECT and BiteOpt are MIT-licensed >>>>>>>> and seem the best candidates to be considered for inclusion in SciPy. >>>>>>>> >>>>>>> >>>>>>> I couldn't find a license restriction for MCS, but maybe I haven't >>>>>>> looked hard enough... Do you have a link for it? I am just curious. >>>>>>> >>>>>> >>>>>> MCS itself doesn't contain any license information, but it depends on >>>>>> MINQ which has a link in "All versions of MINQ are licensed" on this page: >>>>>> https://www.mat.univie.ac.at/~neum/software/minq/. It's only free >>>>>> for non-commercial use. >>>>>> >>>>> >>>>> >>>>> Ah, OK, thank you, I didn't think about that. Of course, assuming >>>>> SciPy had another, different "bound constrained indefinite quadratic >>>>> programming" module then we could easily swap it :-) . >>>>> >>>> >>>> MCS and MINQ are from the same author, so I'd expect the same >>>> restriction to apply to MCS though. We could ask for permission to license >>>> all that under BSD/MIT, sometimes that works - the author seems like the >>>> typical academic who doesn't understand open source licensing. In the past >>>> we've had success with explaining; given how much extra exposure/users MCS >>>> gets if it would be included in SciPy, it may be worth doing if someone is >>>> motivated to work on integrating MCS into SciPy. >>>> >>>> Cheers, >>>> Ralf >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/scipy-dev >>>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hans.dembinski at gmail.com Mon Jan 11 05:46:42 2021 From: hans.dembinski at gmail.com (Hans Dembinski) Date: Mon, 11 Jan 2021 11:46:42 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: References: Message-ID: <4E83BBA7-ECE4-4929-9468-C76FD4136E99@gmail.com> Hi Andrea, > On 8. Jan 2021, at 10:20, Andrea Gavana wrote: > > long time no see :-) . 
I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013. > > This new set of benchmarks pretty much supersedes - and greatly expands - the previous analysis that you can find at this location: http://infinity77.net/global_optimization/ . thank you for sharing this. I was going to point out the "No Free Lunch Theorem" but you mention it yourself on your website, good. I have a few questions/comments: - The exclusion of gradient-based solvers is unfortunate. It is of course up to you what you investigate, but gradient-based solvers are surely useful in practice. Not all real-life problems involve a (non-analytical) simulation. - "This effort stems from the fact that I got fed up with the current attitude of most mathematicians/numerical optimization experts, who tend to demonstrate the advantages of an algorithm based on "elapsed time" or "CPU time" or similar meaningless performance indicators." I don't know where you get that from, what are your sources? I have never seen an academic paper that used elapsed time or CPU time. The scientific papers I have read use the number of function evaluations to compare performance, which is a meaningful machine-independent performance measure if the total time is dominated by the time spent in the function, as it usually is. - I feel uneasy about your performance measure. It is whether the solver finds the minimum value of the function (why not the location of the minimum? Isn't that usually of interest?) within some fixed tolerance in 2000 function evaluations. a) The maximum number of function evaluations that you use does not depend on the dimensionality of the problem, but it clearly should. The search space is larger in higher dimensional problems, so more evaluations are needed by any algorithm. That is also obvious from your results. b) Instead of recording a binary outcome (success/failure to find the minimum in N evaluations), I think it would be more useful to record the number of evaluations until the minimum is reached and then give the mean or median number of function evaluations over many trials as well as the percentage of successful convergences. Algorithms can be ranked by robustness (the number of correctly solved problems) and convergence rate (the average/median number of function evaluations). Robustness and convergence rate may be anti-correlated. You are mixing the two in your performance measure, which makes it more difficult to interpret. - http://infinity77.net/global_optimization/ does not list the same algorithms as the table in http://infinity77.net/go_2021/thebenchmarks.html#info-general-results - While I think we agree that CPU-time is not a useful means to compare algorithms, it is then quite surprising to see Fig 0.6 with the CPU times. I suppose the pure-Python implementations perform so badly because the benchmark functions are all rather small Python functions which are quick to evaluate so that the time spent inside the solver does matter. - I see an inconsistency between your declared goals and your benchmark. You say you only care about optimization of non-analytical functions in which the function computation involves a simulation, but then most of your benchmark functions are analytical functions. In other words, you do not measure the performance on non-analytical functions. 
It is quite possible that the ranking would look different if you used non-analytical functions in the benchmarks. - I would prefer to read technical documents written in a more professional, detached writing style, and I think others would, too. Regards, Hans From andrea.gavana at gmail.com Mon Jan 11 06:37:43 2021 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Mon, 11 Jan 2021 12:37:43 +0100 Subject: [SciPy-Dev] Global Optimization Benchmarks In-Reply-To: <4E83BBA7-ECE4-4929-9468-C76FD4136E99@gmail.com> References: <4E83BBA7-ECE4-4929-9468-C76FD4136E99@gmail.com> Message-ID: Hi Hans, On Mon, 11 Jan 2021 at 11:47, Hans Dembinski wrote: > Hi Andrea, > > > On 8. Jan 2021, at 10:20, Andrea Gavana wrote: > > > > long time no see :-) . I thought to start 2021 with a bit of a bang, > to try and forget how bad 2020 has been... So I am happy to present you > with a revamped version of the Global Optimization Benchmarks from my > previous exercise in 2013. > > > > This new set of benchmarks pretty much supersedes - and greatly expands > - the previous analysis that you can find at this location: > http://infinity77.net/global_optimization/ . > > thank you for sharing this. I was going to point out the "No Free Lunch > Theorem" but you mention it yourself on your website, good. > Thank you for your comments. Before trying to answer your concerns, let me clarify a few things as it seems to me we are mixing two sets of results in the comments: 1. This page: http://infinity77.net/global_optimization/ represents my old work on global optimization (dated 2013), and I consider it now obsolete and superseded by the work at point (2). 2. This page: http://infinity77.net/go_2021/index.html (and subsequent pages) is now my latest reference. If you read the description (and in particular the section about "The Rules"), you will see that they are not quite the same as they were in point (1). > > I have a few questions/comments: > > - The exclusion of gradient-based solvers is unfortunate. It is of course > up to you what you investigate, but gradient-based solvers are surely > useful in practice. Not all real-life problems involve a (non-analytical) > simulation. > I welcome suggestions on which *global* optimization algorithms are gradient-based and can be applied relatively easily using Python, NumPy, SciPy. Note that there is no mention of gradient-based or non-gradient-based algorithms in the page http://infinity77.net/go_2021/index.html. > > - "This effort stems from the fact that I got fed up with the current > attitude of most mathematicians/numerical optimization experts, who tend to > demonstrate the advantages of an algorithm based on "elapsed time" or "CPU > time" or similar meaningless performance indicators." I don't know where > you get that from, what are your sources? I have never seen an academic > paper that used elapsed time or CPU time. The scientific papers I have read > use the number of function evaluations to compare performance, which is a > meaningful machine-independent performance measure if the total time is > dominated by the time spent in the function, as it usually is. > While I agree that in recent years the shift to using function evaluations instead of CPU time has largely prevailed, that was not the case a while back. A few examples: https://www.mat.univie.ac.at/~neum/ms/comparison.pdf https://arxiv.org/pdf/1709.08242.pdf But since that sentence seems to be annoying, I'll just remove it :-) . > > - I feel uneasy about your performance measure. 
> It is whether the solver finds the minimum value of the function (why not the location of the minimum? Isn't that usually of interest?) within some fixed tolerance in 2000 function evaluations.

I have both the function value and the location of the minimum. Of course they are both important, and of course I will use the location of the minimum to do further analysis. The point of the exercise is: I know where the global optimum (optima) lies; can the solver find it with a specific tolerance? Most benchmarks are designed like this.

The 2,000 is not a hard limit anymore per se, as you will notice I have run all the 16 benchmarks with different stopping conditions in terms of maximum number of function evaluations. Specifically, all the benchmarks have been run 22 times, for each run limiting the function evaluations budget to 100, then 200, 300, 400, ..., 2000, 5000, 10000. Some of the benchmarks have been extended to 50,000 (http://infinity77.net/go_2021/globopt.html).

> a) The maximum number of function evaluations that you use does not depend on the dimensionality of the problem, but it clearly should. The search space is larger in higher dimensional problems, so more evaluations are needed by any algorithm. That is also obvious from your results.

I agree, and this is why I have run all the benchmarks multiple times with varying budgets of function evaluations, which is also why many benchmarks have a "Dimensionality Effect" chapter to look at what happens when the problem grows in size (http://infinity77.net/go_2021/gkls.html#size-dimensionality-effects, http://infinity77.net/go_2021/mmtfg.html#size-dimensionality-effects, many others).

That said, most of the problems I usually deal with are computationally expensive - so if one day my model has two parameters to tune and I allow the algorithm to have 500 function evaluations, that may take me one week to solve. If the month after it is decided that the model is better described by a function of 10 parameters, I am not going to ask for 2,500 simulations - it's going to take me a month and a half to get the results back. The solver will have to deal with a 10-parameter model in 500-600 evaluations anyway.

> b) Instead of recording a binary outcome (success/failure to find the minimum in N evaluations), I think it would be more useful to record the number of evaluations until the minimum is reached and then give the mean or median number of function evaluations over many trials as well as the percentage of successful convergences. Algorithms can be ranked by robustness (the number of correctly solved problems) and convergence rate (the average/median number of function evaluations). Robustness and convergence rate may be anti-correlated. You are mixing the two in your performance measure, which makes it more difficult to interpret.

This is exactly what has been done for the SciPy Extended benchmark. http://infinity77.net/go_2021/scipy_extended.html#test-functions-general-solvers-performances tells you that this specific benchmark has been run with 100 random starting points for every benchmark function, and the reported statistics (overall success, number of function evaluations) refer to successful optimizations only. I haven't repeated the 100 random starting points for the other 15 benchmarks because it would take me forever to run them like that.
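For reference, the bookkeeping behind those statistics is simple. A minimal sketch of the wrapper idea (names here are illustrative, not the actual benchmark code; the solver is assumed to accept a plain callable and a bounds sequence):

----
def run_one(solver, func, bounds, f_global, tol=1e-6):
    # Hypothetical harness: wrap the objective to count evaluations and
    # record the first evaluation at which the known global minimum is hit.
    state = {"nfev": 0, "first_success": None}

    def wrapped(x):
        state["nfev"] += 1
        fx = func(x)
        if state["first_success"] is None and abs(fx - f_global) <= tol:
            state["first_success"] = state["nfev"]
        return fx

    solver(wrapped, bounds)  # assumed solver signature
    return state
----

The fraction of runs where `first_success` is not None gives the robustness, and the mean/median of `first_success` over the successful runs only gives the convergence speed - exactly the two numbers you describe.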
> - http://infinity77.net/global_optimization/ does not list the same algorithms as the table in http://infinity77.net/go_2021/thebenchmarks.html#info-general-results

Please see above: http://infinity77.net/global_optimization is old and should not be looked at anymore.

> - While I think we agree that CPU-time is not a useful means to compare algorithms, it is then quite surprising to see Fig 0.6 with the CPU times. I suppose the pure-Python implementations perform so badly because the benchmark functions are all rather small Python functions which are quick to evaluate, so that the time spent inside the solver does matter.

That's generally correct. It was just a way for me to say that some of the solvers are inherently slower than others, although for a real-life problem it shouldn't matter so much.

> - I see an inconsistency between your declared goals and your benchmark. You say you only care about optimization of non-analytical functions in which the function computation involves a simulation, but then most of your benchmark functions are analytical functions. In other words, you do not measure the performance on non-analytical functions. It is quite possible that the ranking would look different if you used non-analytical functions in the benchmarks.

The problems in the benchmarks are analytical (or almost analytical - not all benchmarks are like that, did you take a look at http://infinity77.net/go_2021/huygens.html ?) because the benchmarks are designed to be that way. In global optimization, a benchmark is designed to try and reproduce features that a real-life objective function may exhibit. In the event that I ever manage to get a paper published on this exercise, I will definitely include real-life problems in the analysis, as this is always my aim.

The usefulness of this set of benchmarks comes from the fact that, after all this monster analysis with 16 different test suites, I can approach a new, real-life study by saying: "look, I don't know which solvers will be the best, but I will definitely start with MCS, SHGO or Dual Annealing".

> - I would prefer to read technical documents written in a more professional detached writing style and I think others would, too.

That will probably follow if and when (if ever) a paper is published about this. That said, I welcome corrections or modifications to the writings at any time - I understand that some of the paragraphs I have written can be seen as less professional than needed, so I will be happy to change them.

Andrea.

> Regards,
> Hans
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhaog6 at lsec.cc.ac.cn Thu Jan 14 23:24:27 2021
From: zhaog6 at lsec.cc.ac.cn (Gang Zhao)
Date: Fri, 15 Jan 2021 12:24:27 +0800 (GMT+08:00)
Subject: [SciPy-Dev] Willing to contribute to SciPy
Message-ID: <14cf2847.5b15.177044994de.Coremail.zhaog6@lsec.cc.ac.cn>

Dear SciPy team,

I have two questions after reading the SciPy documents and source code.

1. In the directory "scipy/sparse/linalg/isolve/", I see the implementations of some sparse iterative solvers and parameter interfaces for a preconditioner, such as CG/PCG, BiCGSTAB/PBiCGSTAB, GMRES/PGMRES, FGMRES/PFGMRES, etc. - it's great.
But it seems that concrete preconditioners have not been implemented yet (if multigrid preconditioners have been implemented, please let me know, thanks). So I'd like to ask if I can help you by implementing some multigrid-preconditioned Krylov methods (multigrid being among the fastest preconditioners for SPD problems) as a contribution to the SciPy community, from the perspective of necessity and value.

2. On the other hand, I would also like to know if the other parts of SciPy need some improvements and enhancements that I could work on. I am very willing to make a contribution to SciPy.

Best Wishes,
Gang Zhao

From jonathan.guyer at nist.gov Fri Jan 15 09:25:45 2021
From: jonathan.guyer at nist.gov (Guyer, Jonathan E. Dr. (Fed))
Date: Fri, 15 Jan 2021 14:25:45 +0000
Subject: [SciPy-Dev] Willing to contribute to SciPy
In-Reply-To: <14cf2847.5b15.177044994de.Coremail.zhaog6@lsec.cc.ac.cn>
References: <14cf2847.5b15.177044994de.Coremail.zhaog6@lsec.cc.ac.cn>
Message-ID: <32306B4E-8E88-4BF1-836C-636EAA012387@nist.gov>

I think attention to the sparse solvers would be great. The SciPy solvers are OK, but in my experience are much slower than petsc4py or PyTrilinos, and dramatically slower than PySparse (which, sadly, will likely never make the leap to Py3k).

The inner workings of any of these packages are beyond my capabilities, but if you have the know-how, I'd personally appreciate you looking into it.

- Jon

> On Jan 14, 2021, at 11:24 PM, Gang Zhao wrote:
>
> Dear SciPy team,
>
> I have two questions after reading the SciPy documents and source code.
>
> 1. In the directory "scipy/sparse/linalg/isolve/", I see the implementations of some sparse iterative solvers and parameter interfaces for a preconditioner, such as CG/PCG, BiCGSTAB/PBiCGSTAB, GMRES/PGMRES, FGMRES/PFGMRES, etc. - it's great. But it seems that concrete preconditioners have not been implemented yet (if multigrid preconditioners have been implemented, please let me know, thanks). So I'd like to ask if I can help you by implementing some multigrid-preconditioned Krylov methods (multigrid being among the fastest preconditioners for SPD problems) as a contribution to the SciPy community, from the perspective of necessity and value.
>
> 2. On the other hand, I would also like to know if the other parts of SciPy need some improvements and enhancements that I could work on. I am very willing to make a contribution to SciPy.
>
> Best Wishes,
> Gang Zhao
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From mr_plum at mail.ru Sat Jan 16 08:52:28 2021
From: mr_plum at mail.ru (Vlad Blazhko)
Date: Sat, 16 Jan 2021 14:52:28 +0100
Subject: [SciPy-Dev] Time efficient code to calculate range of Bessel Functions
Message-ID:

Hello,

I would like to contribute time-efficient and numerically stable Cython code for calculating a range of Bessel functions of the first and second kind (jv and yv) to the special subpackage of SciPy. It is useful since you often have a sum that uses jv at a single argument but at several orders that increase by one. These can be computed efficiently through the recurrence relation.
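For concreteness, both jv and yv satisfy the three-term recurrence C_{v+1}(z) = (2v/z) C_v(z) - C_{v-1}(z). A minimal NumPy sketch of the yv_range idea (illustrative only - this is not the Cython implementation; it uses scipy.special.yv just to seed the first two orders, and assumes z has no zeros):

----
import numpy as np
from scipy.special import yv

def yv_range_sketch(v_from, n, z):
    # Illustrative sketch, not the proposed implementation: orders
    # v_from, v_from + 1, ..., v_from + n - 1 stacked on the last axis.
    z = np.asarray(z, dtype=float)
    out = np.empty(z.shape + (n,))
    out[..., 0] = yv(v_from, z)
    if n > 1:
        out[..., 1] = yv(v_from + 1, z)
    for k in range(2, n):
        v = v_from + k - 1
        # upward recurrence: Y_{v+1}(z) = (2*v/z) * Y_v(z) - Y_{v-1}(z)
        out[..., k] = (2.0 * v / z) * out[..., k - 1] - out[..., k - 2]
    return out
----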
While the upward recurrence works fine for the yv function, it quickly diverges for the jv function, so for jv I have implemented Miller's recurrence algorithm, which is stable. In my experience it is usually 5-6 times faster than calling jv and yv directly, and such code usually lies at the heart of simulations (in hot paths).

So I propose the following:

Function names: jv_range, yv_range

Examples:

1) jv_range(v_from=0.3, n=5, z=np.ndarray) -> computes values for orders 0.3, 1.3, 2.3, 3.3, 4.3 and puts them in the last axis, i.e. the shape of the result is z.shape + (n,)

2) jv_range(v_from=-2.3, n=6, z=np.ndarray) -> computes values for orders -2.3, -1.3, -0.3, 0.7, 1.7, 2.7

If the functions don't have enough accuracy for the given orders, they will fall back to the jv and yv functions.

Do you think such functions would be a good contribution to the special subpackage? Feel free to make suggestions on the function interface, and please let me know if you would be so kind as to review my PR.

Best regards,
Vlad Blazhko
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan.c.endres at gmail.com Sun Jan 17 06:10:00 2021
From: stefan.c.endres at gmail.com (Stefan Endres)
Date: Sun, 17 Jan 2021 12:10:00 +0100
Subject: [SciPy-Dev] Global Optimization Benchmarks
In-Reply-To:
References:
Message-ID:

Dear Andrea,

Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions:

- Many of the commercial algorithms offer free licenses for benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.

- The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)? In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite. It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).

Comments on shgo:

- High RAM use in higher dimensions: in the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient-based solver won't work.

- An exception to this "rule" is when special grey-box information is known, such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants.

- In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication; I did not specifically check whether performance actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite).
- I have had a few reports of memory leaks from various users. I have spoken to a few collaborators about the possibility of finding a Masters student to cythonize some of the code or otherwise improve it. Hopefully, this will happen in the summer semester of 2021.

Thank you again for compiling this large set of benchmark results.

Best regards,
Stefan

On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana wrote:

> Dear SciPy Developers & Users,
>
> long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013.
>
> This new set of benchmarks pretty much supersedes - and greatly expands - the previous analysis that you can find at this location: http://infinity77.net/global_optimization/ .
>
> The approach I have taken this time is to select as many benchmark test suites as possible: most of them are characterized by test function *generators*, from which we can actually create an almost unlimited number of unique test problems. The biggest news:
>
> 1. This whole exercise is made up of *6,825* test problems divided across *16* different test suites: most of these problems are of low dimensionality (2 to 6 variables), with a few benchmarks extending to 9+ variables. With all the sensitivities performed during this exercise on those benchmarks, the overall grand total number of function evaluations stands at *3,859,786,025* - close to *4 billion*. Not bad.
>
> 2. A couple of "new" optimization algorithms I have ported to Python:
>
> - MCS: Multilevel Coordinate Search, it's my translation to Python of the original Matlab code from A. Neumaier and W. Huyer (giving then for free also GLS and MINQ). I have added a few, minor improvements compared to the original implementation.
> - BiteOpt: BITmask Evolution OPTimization, I have converted the C++ code into Python and added a few, minor modifications.
>
> Enough chatting for now. The 13 tested algorithms are described here:
>
> http://infinity77.net/go_2021/
>
> High-level description & results of the 16 benchmarks:
>
> http://infinity77.net/go_2021/thebenchmarks.html
>
> Each benchmark test suite has its own dedicated page, with more detailed results and sensitivities.
>
> List of tested algorithms:
>
> > *CMA-ES*: Covariance Matrix Adaptation Evolution Strategy, based on > the following algorithm: > > http://www.lri.fr/~hansen/cmaesintro.html > > http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for > the algorithm) > 5. > > *CRS2*: Controlled Random Search with Local Mutation, as implemented > in the NLOpt package: > > > http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation > 6. > > *DE*: Differential Evolution, described in the following page: > > http://www1.icsi.berkeley.edu/~storn/code.html > > DE is now part of the standard SciPy distribution, and I have taken > the implementation as it stands in SciPy: > > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution > 7. > > *DIRECT*: the DIviding RECTangles procedure, described in: > > > https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf > > > http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L (Python > code for the algorithm) > 8. > > *DualAnnealing*: the Dual Annealing algorithm, taken directly from the > SciPy implementation: > > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing > 9. > > *LeapFrog*: the Leap Frog procedure, which I have been recommended for > use, taken from: > > https://github.com/flythereddflagg/lpfgopt > 10. > > *MCS*: Multilevel Coordinate Search, it?s my translation to Python of > the original Matlab code from A. Neumaier and W. Huyer (giving then for > free also GLS and > MINQ ): > > https://www.mat.univie.ac.at/~neum/software/mcs/ > > I have added a few, minor improvements compared to the original > implementation. See the MCS > section for a quick and > dirty comparison between the Matlab code and my Python conversion. > 11. > > *PSWARM*: Particle Swarm optimization algorithm, it has been described > in many online papers. I have used a compiled version of the C source code > from: > > http://www.norg.uminho.pt/aivaz/pswarm/ > 12. > > *SCE*: Shuffled Complex Evolution, described in: > > Duan, Q., S. Sorooshian, and V. Gupta, Effective and efficient global > optimization for conceptual rainfall-runoff models, Water Resour. Res., 28, > 1015-1031, 1992. > > The version I used was graciously made available by Matthias Cuntz via > a personal e-mail. > 13. > > *SHGO*: Simplicial Homology Global Optimization, taken directly from > the SciPy implementation: > > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo > > > List of benchmark test suites: > > 1. > > SciPy Extended > : > 235 multivariate problems (where the number of independent variables ranges > from 2 to 17), again with multiple local/global minima. > > I have added about 40 new functions to the standard SciPy benchmarks > and > fixed a few bugs in the existing benchmark models in the SciPy repository. > 2. > > GKLS : 1,500 test > functions, with dimensionality varying from 2 to 6, generated with the > super famous GKLS Test Functions Generator > . I have taken the original C code > (available at http://netlib.org/toms/) and converted it to Python. > 3. 
> 3. GlobOpt: 288 tough problems, with dimensionality varying from 2 to 5, created with another test function generator which I arbitrarily named "GlobOpt": https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization . The original code is in C++ and I have bridged it to Python using Cython.
>
> *Many thanks* go to Professor Marco Locatelli for providing an updated copy of the C++ source code.
>
> 4. MMTFG: sort of an acronym for "Multi-Modal Test Function with multiple Global minima", this test suite implements the work of Jani Ronkkonen: https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima . It contains 981 test problems with dimensionality varying from 2 to 4. The original code is in C and I have bridged it to Python using Cython.
>
> 5. GOTPY: a generator of benchmark functions using the Bocharov-Feldbaum "Method-Min", containing 400 test problems with dimensionality varying from 2 to 5. I have taken the Python implementation from https://github.com/redb0/gotpy and improved it in terms of runtime.
>
> Original paper from http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng .
>
> 6. Huygens: this benchmark suite is very different from the rest, as it uses a "fractal" approach to generate test functions. It is based on the work of Cara MacNish on Fractal Functions. The original code is in Java, and at the beginning I just converted it to Python: given it was slow as a turtle, I have re-implemented it in Fortran and wrapped it using f2py, then generating 600 2-dimensional test problems out of it.
>
> 7. LGMVG: not sure about the meaning of the acronym, but the implementation follows the "Max-Set of Gaussians Landscape Generator" described in http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is given in Matlab, but it's fairly easy to convert it to Python. This test suite contains 304 problems with dimensionality varying from 2 to 5.
>
> 8. NgLi: stemming from the work of Chi-Kong Ng and Duan Li, this is a test problem generator for unconstrained optimization, but it's fairly easy to assign bound constraints to it. The methodology is described in https://www.sciencedirect.com/science/article/pii/S0305054814001774 , while the Matlab source code can be found at http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab script to generate 240 problems with dimensionality varying from 2 to 5 by outputting the generator parameters in text files, then used Python to create the objective functions based on those parameters and the benchmark methodology.
>
> 9. MPM2: implementing the "Multiple Peaks Model 2", there is a Python implementation at https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is a test problem generator also used in the smoof library; I have taken the code almost as is and generated 480 benchmark functions with dimensionality varying from 2 to 5.
>
> 10. RandomFields: as described in https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition , it generates benchmark functions by "smoothing" one or more multidimensional discrete random fields and composing them. No source code is given, but the implementation is fairly straightforward from the article itself.
>
> 11. NIST: not exactly the realm of Global Optimization solvers, but the NIST StRD dataset can be used to generate a single objective function as a "sum of squares". I have used the NIST dataset as implemented in lmfit, thus creating 27 test problems with dimensionality ranging from 2 to 9.
>
> 12. GlobalLib: Arnold Neumaier maintains a suite of test problems termed "COCONUT Benchmark", and Sahinidis has converted the GlobalLib and PrincetonLib AMPL/GAMS datasets into C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp). I have used a simple C parser to convert the benchmarks from C to Python.
>
> The global minima are taken from Sahinidis or from Neumaier, or refined using the NEOS server when the accuracy of the reported minima is too low. The suite contains 181 test functions with dimensionality varying between 2 and 9.
>
> 13. CVMG: another "landscape generator"; I had to dig it out using the Wayback Machine at http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html . The acronym stands for "Continuous Valued Multimodality Generator". Source code is in C++ but it's fairly easy to port it to Python. In addition to the original implementation (which uses the Sigmoid as a softmax/transformation function) I have added a few others to create varied landscapes. 360 test problems have been generated, with dimensionality ranging from 2 to 5.
>
> 14. NLSE: again, not really the realm of Global Optimization solvers, but Nonlinear Systems of Equations can be transformed into single objective functions to optimize. I have drawn from many different sources (Publications, ALIAS/COPRIN and many others) to create 44 systems of nonlinear equations with dimensionality ranging from 2 to 8.
>
> 15. Schoen: based on the early work of Fabio Schoen and his short note on a simple but interesting idea for a test function generator, I have taken the C code in the note and converted it into Python, thus creating 285 benchmark functions with dimensionality ranging from 2 to 6.
>
> *Many thanks* go to Professor Fabio Schoen for providing an updated copy of the source code and for the email communications.
>
> 16. Robust: the last benchmark test suite for this exercise, it is actually composed of 5 different kind-of analytical test function generators, containing deceptive, multimodal, flat functions depending on the settings. Matlab source code is available at http://www.alimirjalili.com/RO.html ; I simply converted it to Python and created 420 benchmark functions with dimensionality ranging from 2 to 6.
>
> Enjoy, and Happy 2021 :-) .
>
> Andrea.
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
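P.S. For anyone who wants to quickly try a few of the SciPy entries above, a minimal sketch (the six-hump camel function here is just an illustration, not one of the suites from the announcement):

----
from scipy.optimize import differential_evolution, dual_annealing, shgo

def camel(x):
    # six-hump camel function; global minimum is approximately -1.0316
    x1, x2 = x
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

bounds = [(-3, 3), (-2, 2)]
for solve in (differential_evolution, dual_annealing, shgo):
    res = solve(camel, bounds)
    print(solve.__name__, res.fun, res.nfev)
----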
--
Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
Wissenschaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany
Work phone (DE): +49 (0) 421 218 51238
Cellphone (DE): +49 (0) 160 949 86417
Cellphone (ZA): +27 (0) 82 972 42 89
E-mail (work): s.endres at iwt.uni-bremen.de
Website: https://stefan-endres.github.io/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andrea.gavana at gmail.com Sun Jan 17 08:32:40 2021
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Sun, 17 Jan 2021 14:32:40 +0100
Subject: [SciPy-Dev] Global Optimization Benchmarks
In-Reply-To:
References:
Message-ID:

Hi Stefan,

You're most welcome :-) . I'm happy the experts in the community are commenting and suggesting things, and constructive criticism is also always welcome.

On Sun, 17 Jan 2021 at 12.11, Stefan Endres wrote:

> Dear Andrea,
>
> Thank you very much for this detailed analysis. I don't think I've seen such a large collection of benchmark test suites or collection of DFO algorithms since the publication by Rios and Sahinidis in 2013. Some questions:
>
> - Many of the commercial algorithms offer free licenses for benchmarking problems of less than 10 dimensions. Would you be willing to include some of these in your benchmarks at some point? It would be a great reference to use.

I'm definitely willing to include those commercial algorithms. The test suite per se is almost completely automated, so it's not that complicated to add one or more solvers. I'm generally more inclined to test open-source algorithms, but there's nothing stopping the inclusion of commercial ones. I welcome any suggestions related to commercial solvers, as long as they can run on Python 2 / Python 3 and on Windows (I might be able to set up a Linux virtual machine if absolutely needed, but that would defy part of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy solvers run on all platforms that support SciPy).

> - The collection of test suites you've garnered could be immensely useful for further algorithm development. Is there a possibility of releasing the code publicly (presumably after you've published the results in a journal)? In this case I would also like to volunteer to run some of the commercial solvers on the benchmark suite. It would also help to have a central repository for fixing bugs and adding lower global minima when they are found (of which there are quite a few).

I'm still sorting out all the implications related to a potential paper with my employer, but as far as I can see there shouldn't be any problem with that: assuming everything goes as it should, I will definitely push for making the code open source.

> Comments on shgo:
>
> - High RAM use in higher dimensions: in the higher dimensions the new simplicial sampling can be used (not pushed to scipy yet; I still need to update some documentation before the PR). This alleviates, but does not eliminate, the memory leak issue. As you've said, SHGO is best suited to problems below 10 dimensions, as anything higher leaves the realm of DFO problems and starts to enter the domain of NLP problems. My personal preference in this case is to use the stochastic algorithms (basinhopping and differential evolution) on problems where it is known that a gradient-based solver won't work.
>
> - An exception to this "rule" is when special grey-box information is known, such as symmetry of the objective function (something that can be supplied to shgo to push the applicability of the algorithm up to ~100 variables) or pre-computed bounds on the Lipschitz constants.
> - In the symmetry case SHGO can solve these by supplying the `symmetry` option (which was used in the previous benchmarks done by me for the JOGO publication; I did not specifically check whether performance actually improved on those problems, but shgo did converge on all benchmark problems in the scipy test suite).

To be honest I wouldn't be so concerned in general: SHGO is an excellent global optimization algorithm and it consistently ranks at the top, no matter what problems you throw at it. Together with Dual Annealing, SciPy has gained two phenomenal nonlinear solvers and I'm very happy to see that SciPy is now at the cutting edge of the open-source global optimization universe.

Andrea.

> Thank you again for compiling this large set of benchmark results.
>
> Best regards,
> Stefan
>
> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana wrote:
>
>> Dear SciPy Developers & Users,
>>
>> long time no see :-) . I thought to start 2021 with a bit of a bang, to try and forget how bad 2020 has been... So I am happy to present you with a revamped version of the Global Optimization Benchmarks from my previous exercise in 2013.
>>
>> [remainder of the quoted announcement trimmed; see Stefan's message above for the full text]
>
> --
> Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
> Wissenschaftlicher Mitarbeiter: Leibniz Institute for Materials Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany
> Work phone (DE): +49 (0) 421 218 51238
> Cellphone (DE): +49 (0) 160 949 86417
> Cellphone (ZA): +27 (0) 82 972 42 89
> E-mail (work): s.endres at iwt.uni-bremen.de
> Website: https://stefan-endres.github.io/
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From perimosocordiae at gmail.com Mon Jan 18 15:36:15 2021
From: perimosocordiae at gmail.com (CJ Carey)
Date: Mon, 18 Jan 2021 15:36:15 -0500
Subject: [SciPy-Dev] Cannot generate very large very sparse random matrix
In-Reply-To:
References:
Message-ID:

Sorry for such a late response to this thread, but I wanted to point out another workaround that should help users with numpy 1.17+. You can pass a `random_state` parameter to scipy.sparse.random, which will accept a new-style Generator object.

So if you amend your example to:

scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11, random_state = np.random.default_rng())

then you'll get the fast behavior.

On Fri, Nov 13, 2020 at 6:29 PM Emanuele Olivetti wrote:

> Thank you for your response. Indeed numpy.random.Generator.choice() solves the problem:
> ----
> rng = np.random.default_rng()
> rng.choice(1_000_0000_000_000_000, size=10, replace=False)
>
> array([7363643319410659, 1001129358099623, 7384908776761990,
>        3610742892883208, 9484192959193500, 6273686405826185,
>        1550972534180773, 1845765940909299,  144504113475750,
>        7853188631204629])
> ----
> while:
> ----
> np.random.choice(1_000_0000_000_000_000, size=10, replace=False)
> ---------------------------------------------------------------------------
> MemoryError                               Traceback (most recent call last)
> ----> 1 np.random.choice(1_000_0000_000_000_000, size=10, replace=False)
>
> mtrand.pyx in numpy.random.mtrand.RandomState.choice()
> mtrand.pyx in numpy.random.mtrand.RandomState.permutation()
>
> MemoryError: Unable to allocate 71.1 PiB for an array with shape (10000000000000000,) and data type int64
> ----
>
> According to the latest comment on the github issue you mentioned: "It looks like np.random.Generator should be available from numpy 1.17 on, and the current minimum numpy version is 1.16.5."... So this may require a little while...
>
> As a quick fix, but also a meaningful new feature, would it be possible to extend the API of scipy.sparse.random() with a `replace` option (piped through to np.random.choice()) which, if set to True, would give the user the liberty to solve the issue for very large, very sparse matrices at the cost of some (rare) collisions?
I would gladly > accept it - and that's also my current fix on my local SciPy. > > Best, > > Emanuele > > > > On Fri, Nov 13, 2020 at 4:23 PM CJ Carey > wrote: > >> This is a known issue, see https://github.com/scipy/scipy/issues/9699. >> >> I haven't checked on the status of numpy.random.Generator.choice() in a >> while, so maybe the issue can be resolved now. >> >> On Wed, Nov 11, 2020 at 6:46 PM Emanuele Olivetti >> wrote: >> >>> Hi, >>> >>> I've just noticed that it is not possible to generate very large very >>> sparse random matrices with scipy.sparse.random(). For example: >>> scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11) >>> should create a sparse matrix with only 10 non-zero values... but >>> instead triggers a MemoryError: >>> ---- >>> MemoryError Traceback (most recent call >>> last) >>> in >>> ----> 1 scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11) >>> >>> ~/miniconda3/envs/lap/lib/python3.8/site-packages/scipy/sparse/construct.py >>> in random(m, n, density, format, dtype, random_state, data_rvs) >>> 787 data_rvs = partial(random_state.uniform, 0., 1.) >>> 788 >>> --> 789 ind = random_state.choice(mn, size=k, replace=False) >>> 790 >>> 791 j = np.floor(ind * 1. / m).astype(tp, copy=False) >>> >>> mtrand.pyx in numpy.random.mtrand.RandomState.choice() >>> >>> mtrand.pyx in numpy.random.mtrand.RandomState.permutation() >>> >>> MemoryError: Unable to allocate 7.28 TiB for an array with shape >>> (1000000000000,) and data type int64 >>> ---- >>> >>> Here is the problematic line in current master branch of SciPy: >>> >>> https://github.com/scipy/scipy/blob/master/scipy/sparse/construct.py#L806 >>> >>> In short, the issue is due to random_state.choice(... replace=False) >>> which needs to allocate the humongous array in order to pick the ten random >>> numbers... >>> >>> I understand the technical difficulty of generating random numbers >>> without replacement, but it is quite counterintuitive that in order to >>> generate a sparse random matrix it is necessary to create an equally large >>> but *dense* vector first. >>> >>> Is there a solution to this problem? >>> >>> Thanks in advance, >>> >>> Emanuele >>> >>> >>> >>> >>> -- >>> Le informazioni contenute nella presente comunicazione sono di natura privata >>> e come tali sono da considerarsi riservate ed indirizzate esclusivamente >>> ai destinatari indicati e per le finalit? strettamente legate al >>> relativo contenuto. Se avete ricevuto questo messaggio per errore, vi >>> preghiamo di eliminarlo e di inviare una comunicazione all?indirizzo >>> e-mail del mittente. >>> -- >>> The information transmitted is intended only for the person or entity to >>> which it is addressed and may contain confidential and/or privileged >>> material. If you received this in error, please contact the sender and >>> delete the material. >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > > -- > Le informazioni contenute nella presente comunicazione sono di natura privata > e come tali sono da considerarsi riservate ed indirizzate esclusivamente > ai destinatari indicati e per le finalit? strettamente legate al relativo > contenuto. 
Se avete ricevuto questo messaggio per errore, vi preghiamo di > eliminarlo e di inviare una comunicazione all?indirizzo e-mail del > mittente. > -- > The information transmitted is intended only for the person or entity to > which it is addressed and may contain confidential and/or privileged > material. If you received this in error, please contact the sender and > delete the material. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivetti at fbk.eu Tue Jan 19 05:22:52 2021 From: olivetti at fbk.eu (Emanuele Olivetti) Date: Tue, 19 Jan 2021 11:22:52 +0100 Subject: [SciPy-Dev] Cannot generate very large very sparse random matrix In-Reply-To: References: Message-ID: Thanks for the tip! Emanuele On Mon, Jan 18, 2021 at 9:36 PM CJ Carey wrote: > Sorry for such a late response to this thread, but I wanted to point out > another workaround that should help users with numpy 1.17+. You can pass a > `random_state` parameter to scipy.sparse.random, which will accept a > new-style Generator object. > > So if you amend your example to: > > scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11, random_state = > np.random.default_rng()) > > then you'll get the fast behavior. > > On Fri, Nov 13, 2020 at 6:29 PM Emanuele Olivetti wrote: > >> Thank you for your response. Indeed numpy.random.Generator.choice() >> solves the problem: >> ---- >> rng = np.random.default_rng() >> rng.choice(1_000_0000_000_000_000, size=10, replace=False) >> >> array([7363643319410659, 1001129358099623, 7384908776761990, >> 3610742892883208, 9484192959193500, 6273686405826185, >> 1550972534180773, 1845765940909299, 144504113475750, >> 7853188631204629]) >> ---- >> while: >> ---- >> np.random.choice(1_000_0000_000_000_000, size=10, replace=False) >> >> --------------------------------------------------------------------------- >> MemoryError Traceback (most recent call >> last) >> in >> ----> 1 np.random.choice(1_000_0000_000_000_000, size=10, replace=False) >> >> mtrand.pyx in numpy.random.mtrand.RandomState.choice() >> >> mtrand.pyx in numpy.random.mtrand.RandomState.permutation() >> >> MemoryError: Unable to allocate 71.1 PiB for an array with shape >> (10000000000000000,) and data type int64 >> ---- >> >> According to the latest comment on the github issue you mentioned: "It >> looks like np.random.Generator should be available from numpy 1.17 on, and >> the current minimum numpy version is 1.16.5."... So this may require a >> little while... >> >> As a quick fix but also meaningful new feature, would it be possible to >> extend the API of scipy.sparse.random() and to add the option >> "replace=False" (then piped to np.random.choice()) which, if set to True, >> would give the liberty to the user to solve the issue for very large very >> sparse matrices at the cost of some (rare) collisions? I would gladly >> accept it - and that's also my current fix on my local SciPy. >> >> Best, >> >> Emanuele >> >> >> >> On Fri, Nov 13, 2020 at 4:23 PM CJ Carey >> wrote: >> >>> This is a known issue, see https://github.com/scipy/scipy/issues/9699. >>> >>> I haven't checked on the status of numpy.random.Generator.choice() in a >>> while, so maybe the issue can be resolved now. 
>>> >>> On Wed, Nov 11, 2020 at 6:46 PM Emanuele Olivetti >>> wrote: >>> >>>> Hi, >>>> >>>> I've just noticed that it is not possible to generate very large very >>>> sparse random matrices with scipy.sparse.random(). For example: >>>> scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11) >>>> should create a sparse matrix with only 10 non-zero values... but >>>> instead triggers a MemoryError: >>>> ---- >>>> MemoryError Traceback (most recent call >>>> last) >>>> in >>>> ----> 1 scipy.sparse.random(1_000_000, 1_000_000, density = 1e-11) >>>> >>>> ~/miniconda3/envs/lap/lib/python3.8/site-packages/scipy/sparse/construct.py >>>> in random(m, n, density, format, dtype, random_state, data_rvs) >>>> 787 data_rvs = partial(random_state.uniform, 0., 1.) >>>> 788 >>>> --> 789 ind = random_state.choice(mn, size=k, replace=False) >>>> 790 >>>> 791 j = np.floor(ind * 1. / m).astype(tp, copy=False) >>>> >>>> mtrand.pyx in numpy.random.mtrand.RandomState.choice() >>>> >>>> mtrand.pyx in numpy.random.mtrand.RandomState.permutation() >>>> >>>> MemoryError: Unable to allocate 7.28 TiB for an array with shape >>>> (1000000000000,) and data type int64 >>>> ---- >>>> >>>> Here is the problematic line in current master branch of SciPy: >>>> >>>> https://github.com/scipy/scipy/blob/master/scipy/sparse/construct.py#L806 >>>> >>>> In short, the issue is due to random_state.choice(... replace=False) >>>> which needs to allocate the humongous array in order to pick the ten random >>>> numbers... >>>> >>>> I understand the technical difficulty of generating random numbers >>>> without replacement, but it is quite counterintuitive that in order to >>>> generate a sparse random matrix it is necessary to create an equally large >>>> but *dense* vector first. >>>> >>>> Is there a solution to this problem? >>>> >>>> Thanks in advance, >>>> >>>> Emanuele >>>> >>>> >>>> >>>> >>>> -- >>>> Le informazioni contenute nella presente comunicazione sono di natura privata >>>> e come tali sono da considerarsi riservate ed indirizzate esclusivamente >>>> ai destinatari indicati e per le finalit? strettamente legate al >>>> relativo contenuto. Se avete ricevuto questo messaggio per errore, vi >>>> preghiamo di eliminarlo e di inviare una comunicazione all?indirizzo >>>> e-mail del mittente. >>>> -- >>>> The information transmitted is intended only for the person or entity >>>> to which it is addressed and may contain confidential and/or privileged >>>> material. If you received this in error, please contact the sender and >>>> delete the material. >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/scipy-dev >>>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >> >> -- >> Le informazioni contenute nella presente comunicazione sono di natura privata >> e come tali sono da considerarsi riservate ed indirizzate esclusivamente >> ai destinatari indicati e per le finalit? strettamente legate al >> relativo contenuto. Se avete ricevuto questo messaggio per errore, vi >> preghiamo di eliminarlo e di inviare una comunicazione all?indirizzo >> e-mail del mittente. >> -- >> The information transmitted is intended only for the person or entity to >> which it is addressed and may contain confidential and/or privileged >> material. If you received this in error, please contact the sender and >> delete the material. 
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at python.org
>> https://mail.python.org/mailman/listinfo/scipy-dev
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

--
--
Le informazioni contenute nella presente comunicazione sono di natura privata e come tali sono da considerarsi riservate ed indirizzate esclusivamente ai destinatari indicati e per le finalità strettamente legate al relativo contenuto. Se avete ricevuto questo messaggio per errore, vi preghiamo di eliminarlo e di inviare una comunicazione all'indirizzo e-mail del mittente.
--
The information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. If you received this in error, please contact the sender and delete the material.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at gmail.com  Thu Jan 21 16:48:29 2021
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Thu, 21 Jan 2021 16:48:29 -0500
Subject: [SciPy-Dev] Name for Page's L test
In-Reply-To: References: Message-ID:

On 1/6/21, josef.pktd at gmail.com wrote:
> On Wed, Jan 6, 2021 at 9:09 AM wrote:
>>
>> On Jan 5, 2021, at 7:41 PM, Robert Kern wrote:
>>
>> On Tue, Jan 5, 2021 at 6:04 PM wrote:
>>>
>>> IIUC this is a test of monotonicity, which is what is implied by the
>>> colloquial expression "trending upward", so I'm confused as to why this
>>> isn't a trend.
>>>
>>> Perhaps the author has conflated it with the more specific "linear trend"?
>>
>> I think the point they are making is that the null hypothesis gets
>> rejected for even a single treatment being (consistently) lower than the
>> following one, whereas one might expect a "trend" to span the whole (or a
>> substantial part of) the treatment space.
>>
>> I'm afraid I don't care enough about this area of statistics to dive any
>> deeper.
>>
>> I don't really mind one way or the other. I'd rather name it something
>> that helps people find it, even if some experts may quibble about the
>> strict accuracy of the name. Some combination of `page` and `trend` seems
>> to me to be better than just `page` or `pagel`.
>>
>> I concur.
>
> I agree that "some combination of `page` and `trend`" seems better.
>
> I have seen "trend test" used in several cases for tests of equality
> against trending, ordered, monotonic alternatives. There might be other
> trend tests that end up in scipy.stats, so qualifying by "page" is
> appropriate.
>
> `page_l_test` is more like `mood`: not famous enough to remember what
> it does without looking it up.
>
> Aside: in statsmodels I would use something that combines "rank" and
> "trend". (I ended up using `rank_compare_2indep` for my version of the
> brunner_munzel test and statistic in statsmodels.)
>
> Josef
>
>> --
>> Robert Kern
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at python.org
>> https://mail.python.org/mailman/listinfo/scipy-dev
>>
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at python.org
>> https://mail.python.org/mailman/listinfo/scipy-dev
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

Thanks everyone.

After reviewing the comments, it looks like `page_trend_test` is a good name (descriptive and recognizable, despite the technical issue with "trend"), so we'll go with that.

Warren
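For concreteness, the statistic under discussion is easy to state: rank the treatments within each replication, sum the ranks per treatment, and weight those sums by the hypothesized ordering. A bare-bones sketch (illustration only: the function name is hypothetical, and the significance evaluation that a scipy implementation would need is omitted):

----
import numpy as np
from scipy.stats import rankdata

def page_l_statistic(data):
    # data: shape (n, k), with n independent replications (rows) and
    # k treatments (columns) in the hypothesized order.
    n, k = data.shape
    ranks = np.apply_along_axis(rankdata, 1, data)  # rank within each row
    column_rank_sums = ranks.sum(axis=0)
    # L = sum_j j * R_j, which is large when later treatments tend to
    # receive higher within-row ranks.
    return float(np.sum(np.arange(1, k + 1) * column_rank_sums))

# A monotonically increasing treatment effect produces a large L:
rng = np.random.default_rng(0)
table = rng.normal(size=(10, 5)) + np.arange(5)
print(page_l_statistic(table))
----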
From marcus.pernow at gmail.com  Sat Jan 23 13:05:09 2021
From: marcus.pernow at gmail.com (Marcus P)
Date: Sat, 23 Jan 2021 19:05:09 +0100
Subject: [SciPy-Dev] Proposal to add Takagi factorisation
Message-ID:

Hi,

I have written some code to perform a Takagi factorisation of complex symmetric matrices (similar to the singular value decomposition, but it only exists for symmetric matrices and decomposes a matrix A as A = U @ S @ U.T, where U is unitary and S is the diagonal matrix of singular values), based on an algorithm published in doi.org/10.1103/PhysRevA.94.062109. I saw that SciPy does not currently have this, and I think it would be a useful addition. If the community is interested, I would be happy to contribute the code.

Question about the preferred name: it is sometimes known as the Autonne-Takagi or Autonne decomposition (this is the name used in Matrix Analysis by Horn and Johnson). It was first discovered by Autonne for non-singular matrices and later rediscovered by Takagi for both singular and non-singular matrices. From what I can tell, the more common name is Takagi (see e.g. the paper above, arxiv.org/abs/physics/0607103, and doi.org/10.1134/S0965542512010034), so I think that name makes more sense. Are there any opinions about this?

Regards,

Marcus
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
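As an aside for readers unfamiliar with the factorisation: one common SVD-based construction can be sketched in a few lines. This is an illustration only, not Marcus's code nor the algorithm of the cited paper, and the principal matrix square root may need care in edge cases (e.g. eigenvalues of the phase-mismatch factor near -1), so the result should be verified numerically:

----
import numpy as np
from scipy.linalg import sqrtm

def takagi_sketch(A):
    # Takagi: A = U @ diag(s) @ U.T for complex *symmetric* A (A == A.T,
    # not Hermitian). Start from an ordinary SVD.
    u, s, vh = np.linalg.svd(A)
    # For symmetric A, the left and right SVD factors differ only by a
    # unitary, symmetric "phase mismatch" z; absorbing sqrt(z) into u
    # yields the Takagi factor.
    z = u.conj().T @ vh.T
    U = u @ sqrtm(z)
    return U, s

rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.T                                    # complex symmetric test matrix
U, s = takagi_sketch(A)
print(np.allclose(U @ np.diag(s) @ U.T, A))    # True: A is reconstructed
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
----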
From stefanv at berkeley.edu  Wed Jan 27 21:20:46 2021
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 27 Jan 2021 18:20:46 -0800
Subject: [SciPy-Dev] PSF Scientific WG grants
Message-ID: <7d68f8da-0781-4b02-aa71-d26ad8893908@www.fastmail.com>

Hi all,

I'd like to make you aware of this call for funding from the PSF:
https://pyfound.blogspot.com/2020/12/psf-scientific-working-group-announces.html

Funding is for up to 4000 USD. The deadline has been extended (the blog post will soon be updated accordingly); I recommend submitting requests within the next two weeks.

Stéfan

From treverhines at gmail.com  Fri Jan 29 08:13:18 2021
From: treverhines at gmail.com (Trever Hines)
Date: Fri, 29 Jan 2021 08:13:18 -0500
Subject: [SciPy-Dev] ENH: improve RBF interpolation
Message-ID:

Hello scipy developers,

I would like to contribute code to scipy to address some issues regarding RBF interpolation. The code can be found on my branch here. My contribution consists of two new classes for scattered N-D interpolation:

1. `RBFInterpolator`: This is intended to be a replacement for `Rbf` that addresses the issues mentioned in 9904 and 4790. Namely, the major differences from `Rbf` are: 1) the usage is similar to `NearestNDInterpolator` and `LinearNDInterpolator`, making it easier to swap out different interpolation methods; 2) the sign of the smoothing parameter is correct (see page 10 of these lecture notes); and 3) the interpolant includes polynomial terms. For some RBF choices (values of 'linear', 'thin_plate', 'cubic', 'quintic', or 'multiquadric' for `function` in `Rbf`), the additional polynomial terms are needed to ensure that the interpolation problem is well-posed (see theorem 3.2.7 in this document). Without the additional polynomial terms for these RBFs, I have noticed that some values of the smoothing parameter (with the corrected sign) result in an obviously erroneous interpolant. Even when the chosen RBF does not require additional polynomial terms, they can still improve the quality of the interpolant. In particular, the polynomial terms are able to accommodate shifts or linear trends in the data, which the RBFs tend to struggle with by themselves.

2. `KNearestRBFInterpolator`: This class performs RBF interpolation using only the k nearest data points to each interpolation point (which was suggested in 5180). This class is useful when there are too many observations for `RBFInterpolator` (on the order of tens of thousands) and you want an interpolant that *looks* smoother than what you get with `NearestNDInterpolator` or `LinearNDInterpolator`. My concern with interpolation using the k nearest neighbors is that it is a bit of an ad hoc strategy to work around computational limitations. That being said, I have seen a similar strategy used in the Kriging world (Kriging is a form of RBF interpolation).

I would appreciate your feedback on whether you think these would be valuable contributions to scipy. If so, I will make the pull request after adding benchmarks, unit tests, and more docs.

Thanks,

Trever
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
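To make the polynomial augmentation described in the message above concrete, here is a bare-bones sketch of the standard saddle-point system for a thin-plate spline with an appended linear polynomial. It illustrates the idea only (hypothetical names, not the code on Trever's branch), and it assumes distinct data points in general position:

----
import numpy as np

def tps_with_poly(x, y):
    # Fit a thin-plate spline to values y at points x (shape (n, d)),
    # augmented with a linear polynomial. Returns an evaluation function.
    n, d = x.shape
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(r > 0, r**2 * np.log(r), 0.0)   # thin-plate kernel
    P = np.hstack([np.ones((n, 1)), x])              # monomials 1, x_1..x_d
    # Interpolation conditions plus the moment conditions P.T @ w = 0
    # make the system square and, for points in general position, solvable.
    A = np.block([[K, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    w, c = coef[:n], coef[n:]

    def evaluate(xq):
        rq = np.linalg.norm(xq[:, None, :] - x[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            Kq = np.where(rq > 0, rq**2 * np.log(rq), 0.0)
        return Kq @ w + np.hstack([np.ones((len(xq), 1)), xq]) @ c

    return evaluate

# The polynomial part absorbs a linear trend that the kernel alone
# struggles with:
rng = np.random.default_rng(2)
pts = rng.uniform(size=(50, 2))
vals = 3.0 + 2.0 * pts[:, 0] - pts[:, 1]             # purely linear data
f = tps_with_poly(pts, vals)
print(np.allclose(f(pts), vals))                     # True
----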
From ralf.gommers at gmail.com  Sat Jan 30 11:34:12 2021
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 30 Jan 2021 17:34:12 +0100
Subject: [SciPy-Dev] merged scipy.stats.qmc with quasi-Monte Carlo functionality
Message-ID:

Hi all,

I think this is worth an announcement: we just merged https://github.com/scipy/scipy/pull/10844, which adds a new submodule with quasi-Monte Carlo functionality, `scipy.stats.qmc`. See http://scipy.github.io/devdocs/stats.qmc.html for details.

This was in the works for two years. Thanks to Pamphile Roy and Max Balandat for contributing this module, and to the many reviewers and QMC domain experts (see https://github.com/scipy/scipy/pull/10844#issuecomment-770231427) who pitched in.

Cheers,

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com  Sat Jan 30 15:41:20 2021
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 30 Jan 2021 13:41:20 -0700
Subject: [SciPy-Dev] NumPy 1.20.0 released
Message-ID:

Hi All,

On behalf of the NumPy team I am pleased to announce the release of NumPy 1.20.0. This NumPy release is the largest to date, containing some 684 merged pull requests contributed by 184 people. See the list of highlights below. The Python versions supported for this release are 3.7-3.9; support for Python 3.6 has been dropped. Wheels can be downloaded from PyPI; source archives, release notes, and wheel hashes are available on Github. Linux users will need pip >= 19.3 in order to install manylinux2010 and manylinux2014 wheels.

*Highlights*

- Annotations for NumPy functions. This work is ongoing and improvements can be expected pending feedback from users.
- Wider use of SIMD to increase execution speed of ufuncs. Much work has been done in introducing universal functions that will ease use of modern features across different hardware platforms. This work is ongoing.
- Preliminary work in changing the dtype and casting implementations in order to provide an easier path to extending dtypes. This work is ongoing but enough has been done to allow experimentation and feedback.
- Extensive documentation improvements comprising some 185 PR merges. This work is ongoing and part of the larger project to improve NumPy's online presence and usefulness to new users.
- Further cleanups related to removing Python 2.7. This improves code readability and removes technical debt.
- Preliminary support for the upcoming Cython 3.0.

*Contributors*

A total of 184 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

* Aaron Meurer + * Abhilash Barigidad + * Abhinav Reddy + * Abhishek Singh + * Al-Baraa El-Hag + * Albert Villanova del Moral + * Alex Leontiev + * Alex Rockhill + * Alex Rogozhnikov * Alexander Belopolsky * Alexander Kuhn-Regnier + * Allen Downey + * Andras Deak * Andrea Olivo + * Andrew Eckart + * Anirudh Subramanian * Anthony Byuraev + * Antonio Larrosa + * Ashutosh Singh + * Bangcheng Yang + * Bas van Beek + * Ben Derrett + * Ben Elliston + * Ben Nathanson + * Bernie Gray + * Bharat Medasani + * Bharat Raghunathan * Bijesh Mohan + * Bradley Dice + * Brandon David + * Brandt Bucher * Brian Soto + * Brigitta Sipocz * Cameron Blocker + * Carl Leake + * Charles Harris * Chris Brown + * Chris Vavaliaris + * Christoph Gohlke * Chunlin Fang * CloseChoice + * Daniel G. A. Smith + * Daniel Hrisca * Daniel Vanzo + * David Pitchford + * Davide Dal Bosco + * Derek Homeier * Dima Kogan + * Dmitry Kutlenkov + * Douglas Fenstermacher + * Dustin Spicuzza + * E. Madison Bray + * Elia Franzella + * Enrique Matías Sánchez + * Erfan Nariman | Veneficus + * Eric Larson * Eric Moore * Eric Wieser * Erik M. Bray * EthanCJ-git + * Etienne Guesnet + * FX Coudert + * Felix Divo * Frankie Robertson + * Ganesh Kathiresan * Gengxin Xie * Gerry Manoim + * Guilherme Leobas * Hassan Kibirige * Hugo Mendes + * Hugo van Kemenade * Ian Thomas + * InessaPawson + * Isabela Presedo-Floyd + * Isuru Fernando * Jakob Jakobson + * Jakub Wilk * James Myatt + * Jesse Li + * John Hagen + * John Zwinck * Joseph Fox-Rabinovitz * Josh Wilson * Jovial Joe Jayarson + * Julia Signell + * Jun Kudo + * Karan Dhir + * Kaspar Thommen + * Kerem Hallaç * Kevin Moore + * Kevin Sheppard * Klaus Zimmermann + * LSchroefl + * Laurie + * Laurie Stephey + * Levi Stovall + * Lisa Schwetlick + * Lukas Geiger + * Madhulika Jain Chambers + * Matthias Bussonnier * Matti Picus * Melissa Weber Mendonça * Michael Hirsch * Nick R. Papior * Nikola Forró * Noman Arshad + * Paul YS Lee + * Pauli Virtanen * Paweł Redzyński + * Peter Andreas Entschev * Peter Bell * Philippe Ombredanne + * Phoenix Meadowlark + * Piotr Gaiński * Raghav Khanna + * Raghuveer Devulapalli * Rajas Rade + * Rakesh Vasudevan * Ralf Gommers * Raphael Kruse + * Rashmi K A + * Robert Kern * Rohit Sanjay + * Roman Yurchak * Ross Barnowski * Royston E Tauro + * Ryan C Cooper + * Ryan Soklaski * Safouane Chergui + * Sahil Siddiq + * Sarthak Vineet Kumar + * Sayed Adel * Sebastian Berg * Sergei Vorfolomeev + * Seth Troisi * Sidhant Bansal + * Simon Gasse * Simon Graham + * Stefan Appelhoff + * Stefan Behnel + * Stefan van der Walt * Steve Dower * Steve Joachim + * Steven Pitman + * Stuart Archibald * Sturla Molden * Susan Chang + * Takanori H + * Tapajyoti Bose + * Thomas A Caswell * Tina Oberoi * Tirth Patel * Tobias Pitters + * Tomoki, Karatsu + * Tyler Reddy * Veniamin Petrenko + * Wansoo Kim + * Warren Weckesser * Wei Yang + * Wojciech Rzadkowski * Yang Hau + * Yogesh Raisinghani + * Yu Feng * Yuya Unno + * Zac Hatfield-Dodds * Zuhair Ali-Khan + * @abhilash42 + * @danbeibei + * @dojafrat * @dpitch40 + * @forfun + * @iamsoto + * @jbrockmendel + * @leeyspaul + * @mitch + * @prateek arora + * @serge-sans-paille + * @skywalker + * @stphnlyd + * @xoviat * @???
+ * @JMFT + * @Jack + * @Neal C +

Cheers,

Charles Harris
-------------- next part --------------
An HTML attachment was scrubbed...
URL: