From ralf.gommers at gmail.com Tue Sep 1 05:24:37 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 1 Sep 2020 10:24:37 +0100 Subject: [SciPy-Dev] Thanks wiki protocol In-Reply-To: <281A1467-9EA2-4751-8AB7-79B57D61E441@vt.edu> References: <281A1467-9EA2-4751-8AB7-79B57D61E441@vt.edu> Message-ID: On Mon, Aug 31, 2020 at 11:20 PM wrote: > On Aug 31, 2020, at 5:29 AM, Ralf Gommers wrote: > On Mon, Aug 31, 2020 at 2:47 AM <rlucas7 at vt.edu> wrote: > On Aug 17, 2020, at 10:40 PM, Tyler Reddy wrote: > For the author lists (with new contributors highlighted), the command looks like "python tools/authors.py 042731365..fb51f487c8". I think some of those release management tools are dependent on proper usage of milestones, though perhaps not in that specific case where the hash range is provided directly. > Thanks Tyler. I didn't check closely on the milestone issue - you mean use of tags on the commit log right? > I was able to add a flag to get the names and nothing else so those could easily be copied into the file on the website. > PR is: https://github.com/scipy/scipy/pull/12793 > I also opened a PR on the website repo to move over the thanks file: https://github.com/scipy/scipy.org/pull/364 > Finally, I opened a separate PR to delete the existing thanks: https://github.com/scipy/scipy/pull/12792 > I think this is what you had in mind Ralf but if not please let me know on the PR(s). > Thanks Lucas. That set of PRs looks like what I had in mind. Looking at the moving of content - should we decide right now to drop all the outdated/incomplete stuff, and simply have a list of contributor names? Either organized by date of first contribution, or alphabetically? Yes, unless anyone objects let's keep the names only. 
I'll update the PR for that one. > Hmm, for the authors tool it's not giving the date of the commit, unless I missed that in the source. The dates way would be easier to track because names can just be appended. Lexicographic would be difficult to insert new names into unless we overwrite the *full* list of contributors (on the website) on each append. > In the lexicographic case it would need to be by the first characters displayed. I think I'd prefer alphabetic ordering, because it looks a lot less messy. The diffs won't be any larger, and it's not hard to output all names. > Some of the contributors only have their github handle and not a first/last name. There shouldn't be too many; we ask for real names at release time and add them to .mailmap, most people are happy to be acknowledged. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Sun Sep 6 19:37:51 2020 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Mon, 7 Sep 2020 01:37:51 +0200 Subject: [SciPy-Dev] Cythonize and add backslash logic to "scipy.linalg.solve" Message-ID: Dear all, I've finally managed to draft a PR [1] for the functionality given in the title (see also the "Algorithms" section of [2]). It is almost halfway done but the gist is already in place. I'm posting this to both NumPy and SciPy lists since I think it is important enough to get feedback from all parties involved. The particular detail I need to be taught is the Cython parts and fleshing out anti-patterns and code smells. Probably NumPy folks are better equipped to spot the C related issues. There is this ILP64 issue that I am aware of which would cause a bit of trouble and I would appreciate it if we can tackle it at this early stage. Thanks in advance, ilhan [1] : https://github.com/scipy/scipy/pull/12824 [2] : https://nl.mathworks.com/help/matlab/ref/mldivide.html -------------- next part -------------- An HTML attachment was scrubbed... 
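The MATLAB "backslash" behaviour the PR emulates is to inspect the matrix and dispatch to a structure-specific LAPACK routine. Today a caller can supply that hint by hand through `scipy.linalg.solve`'s existing `assume_a` keyword; a minimal sketch (the matrix here is synthetic, built to be symmetric positive definite so that the Cholesky path applies):

```python
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 100))
# A @ A.T + 100*I is symmetric positive definite by construction.
spd = a @ a.T + 100 * np.eye(100)
b = rng.standard_normal(100)

x_gen = solve(spd, b)                  # generic LU-based path
x_pos = solve(spd, b, assume_a="pos")  # Cholesky path, exploits the structure
print(np.allclose(x_gen, x_pos))       # both paths agree to rounding error
```

The point of the PR is that the structure detection (triangular, symmetric, positive definite, etc.) would happen automatically instead of through `assume_a`.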
URL: From charlesr.harris at gmail.com Thu Sep 10 14:40:27 2020 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 10 Sep 2020 12:40:27 -0600 Subject: [SciPy-Dev] NumPy 1.19.2 released Message-ID: Hi All, On behalf of the NumPy team I am pleased to announce that NumPy 1.19.2 has been released. This release fixes several bugs, prepares for the upcoming Cython 3.x release, and pins setuptools to keep distutils working while upstream modifications are ongoing. The aarch64 wheels are built with the latest manylinux2014 release that fixes the problem of differing page sizes used by different Linux distros. There is a known problem with Windows 10 version=2004 and OpenBLAS svd that we are trying to debug. If you are running that Windows version you should use a NumPy version that links to the MKL library, earlier Windows versions are fine. This release supports Python 3.6-3.8. Downstream developers should use Cython >= 0.29.21 when building for Python 3.9 and Cython >= 0.29.16 when building for Python 3.8. OpenBLAS >= 3.7 is needed to avoid wrong results on the Skylake architecture. The NumPy wheels for this release can be downloaded from PyPI, source archives, release notes, and wheel hashes are available from GitHub. Linux users will need pip >= 19.3 in order to install manylinux2010 and manylinux2014 wheels. *Contributors* A total of 8 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Charles Harris - Matti Picus - Pauli Virtanen - Philippe Ombredanne + - Sebastian Berg - Stefan Behnel + - Stephan Loyd + - Zac Hatfield-Dodds *Pull requests merged* A total of 9 pull requests were merged for this release. - #16959: TST: Change aarch64 to arm64 in travis.yml. - #16998: MAINT: Configure hypothesis in ``np.test()`` for determinism,... 
- #17000: BLD: pin setuptools < 49.2.0 - #17015: ENH: Add NumPy declarations to be used by Cython 3.0+ - #17125: BUG: Remove non-threadsafe sigint handling from fft calculation - #17243: BUG: core: fix ilp64 blas dot/vdot/... for strides > int32 max - #17244: DOC: Use SPDX license expressions with correct license - #17245: DOC: Fix the link to the quick-start in the old API functions - #17272: BUG: fix pickling of arrays larger than 2GiB Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Sep 10 17:24:14 2020 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 10 Sep 2020 15:24:14 -0600 Subject: [SciPy-Dev] Numpy wheels with Python 3.9 support. Message-ID: Hi All, There are numpy pre-wheels for Python 3.9 available on the x86_64, i686, and aarch64 platforms. They use manylinux2010 and manylinux2014, so you need a recent pip to get them. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 11 17:41:28 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 11 Sep 2020 22:41:28 +0100 Subject: [SciPy-Dev] proposal to drop Python 3.6 and NumPy 1.14 / 1.15 In-Reply-To: References: Message-ID: On Mon, Aug 24, 2020 at 10:47 PM Warren Weckesser < warren.weckesser at gmail.com> wrote: > Sorry for the top post, but this question doesn't follow immediately > from any previous comment. > > Is the conclusion of this thread that with SciPy 1.6 we will > definitely drop Python 3.6 support? I think so. Given that there were a few hesitations at least, I'll split it in separate PRs - will do the NumPy one first. It might be useful to use > dataclasses (https://docs.python.org/3/library/dataclasses.html), and > they are new in Python 3.7. > You want to start using them now, or is it more a "nice to have the option"? 
Cheers, Ralf > Warren > > > On 8/20/20, Charles R Harris wrote: > > On Sun, Aug 16, 2020 at 5:17 PM Ilhan Polat > wrote: > > > >> I have too many personas in my head arguing about this. And many of them > >> are in conflict. > >> > >> 1- I think I had enough arguments online to be known as buying none of > >> that scientific stability views. That doesn't mean I don't get their > >> point, > >> I really really do, but still. It's in the job description. Software is > a > >> big part of science just like the papers. (What Matti said) > >> 2- Even a two-year old code is not safe to let it collect dust and > >> constantly requires attention and becomes a liability rather than a tool > >> (what Bennet said). > >> 3- Matlab breaks things all the time with 0 user consideration. None. > But > >> admittedly they break in style. We shouldn't be like them (not even > >> close) > >> but the point is when it is a commercial product, we don't hear the > >> uproar > >> we receive. People silently fix things. > >> 4- Just because we update the python version, a lot of packages stop > >> working. This is seriously disheartening. Happens to me all the time > >> (protobuf, influxdb etc). And really annoying if one package has not > >> released the new version yet. > >> 5- Python is releasing too quick. I know Python is not Fortran > >> (thankfully) but this is the other end of the spectrum. With every > >> version > >> one or two of us spent at least 2 weeks of intensive "What did they > >> change > >> on Windows again?" bug hunting. Hence our platform is not reliable and > >> now > >> they want 12 months. (What Ralf said) > >> 6- There are always new users, downloading Python for the first time and > >> it is expected that SciPy stack should work off-the-shelf. Hence we > don't > >> have the luxury to wait and see (see 4th item). 
> >> 7- If there are limited resources for software support, then why people take the risk and update their stuff is something that has been discussed for decades now. And no one seems to converge to a point. (What Matti said) > >> 8- what Evgeni suggested > >> I'm not sure what to do about this but I am also more worried about the Python version than the NumPy version; my limited anecdotal evidence around me suggests that people update NumPy + SciPy together mainly when they use pip with -U (--upgrade) on some package that has SciPy as a dependency. So I feel that it is safe to bump. > > Just for the NumPy perspective, there is nothing in the forthcoming NumPy 1.20 that doesn't work with Python 3.6, it is just a question of what wheels to provide. OTOH, as Python advances I don't want to worry about someone using newer features, whatever they are. I go back and forth, but feel that 48 months is about the right support window and that argues for dropping Python 3.6. With the faster pace of Python development we will probably want to support the last four versions going forward. Note that SciPy doesn't need to support all the NumPy versions that support a particular version of Python, just the latest one. The main concern doesn't seem to be compatibility as much as availability. > > Chuck > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
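For reference, the dataclasses feature Warren mentions (new in Python 3.7) generates `__init__`, `__repr__` and `__eq__` automatically, which is the kind of boilerplate result objects otherwise need by hand. A small sketch; the `SolverResult` class and its fields are made up for illustration, not an actual SciPy API:

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class SolverResult:
    # __init__, __repr__ and __eq__ are generated by the decorator.
    x: np.ndarray
    fun: float
    nit: int = 0
    # Mutable defaults must go through default_factory to avoid sharing
    # one list across all instances.
    messages: list = field(default_factory=list)


res = SolverResult(x=np.array([1.0, 2.0]), fun=0.5, nit=12)
print(res)  # SolverResult(x=array([1., 2.]), fun=0.5, nit=12, messages=[])
```

This only works on Python 3.7+, which is exactly why it is tied to the question of dropping 3.6.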
URL: From peter.mahler.larsen at gmail.com Sat Sep 12 13:31:26 2020 From: peter.mahler.larsen at gmail.com (Peter Larsen) Date: Sat, 12 Sep 2020 19:31:26 +0200 Subject: [SciPy-Dev] Disjoint set / union find data structure Message-ID: >> On Mon, Jul 20, 2020 at 2:37 PM Peter Larsen wrote: >> In many projects I find myself needing a disjoint set data structure >> https://en.wikipedia.org/wiki/Disjoint-set_data_structure. >> It would be very convenient if we had an implementation in SciPy. >> > >That seems like a reasonable and useful thing to add. > > >> The codebase already contains two implementations, albeit not publicly >> accessible ones: >> >> https://github.com/scipy/scipy/blob/1d8b34b47d2b881ea904f4c9bf4c49b3fa36b29a/scipy/cluster/_hierarchy.pyx#L1074 >> >> https://github.com/scipy/scipy/blob/8dba340293fe20e62e173bdf2c10ae208286692f/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c#L40 >> >> If there are no objections to its inclusion in SciPy I will write a PR. I >> tentatively propose to put it in scipy.cluster but other suggestions are >> welcome. >> > >Cluster, spatial and sparse all could make sense. If we foresee using this >internally in all those modules, I think we'd have to put it in `sparse` or >in `_lib`. The reason: no further dependencies between modules. cluster >already depends on spatial, and spatial depends on sparse. > >Cheers, >Ralf I made a pull request for a disjoint set here: https://github.com/scipy/scipy/pull/12600 Thanks to help from reviewers it is in quite good shape. The only thing missing is someone with sufficient "ownership" of the `scipy.cluster.hierarchy` module to sign off on its inclusion there. To allow internal reuse without creating further dependencies, the actual disjoint set code is in `scipy._lib` but with the public interface in `scipy.cluster.hierarchy`. Peter -------------- next part -------------- An HTML attachment was scrubbed... 
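For readers unfamiliar with the structure under discussion, a minimal pure-Python union-find sketch with path compression and union by size follows; this is illustrative only, and the API of the actual PR (#12600) may differ:

```python
class DisjointSet:
    """Minimal union-find with path compression and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.size = [1] * n

    def find(self, x):
        # Walk up to the root, then compress the path behind us.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same subset
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]
        return True


ds = DisjointSet(5)
ds.merge(0, 1)
ds.merge(3, 4)
print(ds.find(1) == ds.find(0))  # True
print(ds.find(2) == ds.find(0))  # False
```

With both optimizations the amortized cost per operation is nearly constant, which is why the same structure keeps reappearing internally in `scipy.cluster` and SuperLU.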
URL: From ralf.gommers at gmail.com Sun Sep 13 05:50:18 2020 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 13 Sep 2020 10:50:18 +0100 Subject: [SciPy-Dev] Disjoint set / union find data structure In-Reply-To: References: Message-ID: On Sat, Sep 12, 2020 at 6:31 PM Peter Larsen wrote: > >> On Mon, Jul 20, 2020 at 2:37 PM Peter Larsen wrote: > >> In many projects I find myself needing a disjoint set data structure > >> https://en.wikipedia.org/wiki/Disjoint-set_data_structure. > >> It would be very convenient if we had an implementation in SciPy. > > >That seems like a reasonable and useful thing to add. > >> The codebase already contains two implementations, albeit not publicly > >> accessible ones: > >> https://github.com/scipy/scipy/blob/1d8b34b47d2b881ea904f4c9bf4c49b3fa36b29a/scipy/cluster/_hierarchy.pyx#L1074 > >> https://github.com/scipy/scipy/blob/8dba340293fe20e62e173bdf2c10ae208286692f/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c#L40 > >> If there are no objections to its inclusion in SciPy I will write a PR. I > >> tentatively propose to put it in scipy.cluster but other suggestions are > >> welcome. > >Cluster, spatial and sparse all could make sense. If we foresee using this > >internally in all those modules, I think we'd have to put it in `sparse` or > >in `_lib`. The reason: no further dependencies between modules. cluster > >already depends on spatial, and spatial depends on sparse. > >Cheers, > >Ralf > I made a pull request for a disjoint set here: > https://github.com/scipy/scipy/pull/12600 > Looks great! > > Thanks to help from reviewers it is in quite good shape. The only thing > missing is someone with sufficient "ownership" of the > `scipy.cluster.hierarchy` module to sign off on its inclusion there. 
To allow internal reuse without creating further dependencies, the actual disjoint set code is in `scipy._lib` but with the public interface in `scipy.cluster.hierarchy`. Thanks for the ping Peter. I couldn't keep up with PR updates/review, so I missed the request there - commented there now. As I wrote on the PR, scipy.cluster doesn't have a dedicated maintainer. Of all the active maintainers, Nikolay and I have worked on it most over the past years I think. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.home at sbcglobal.net Wed Sep 16 13:09:11 2020 From: frank.home at sbcglobal.net (frank.home at sbcglobal.net) Date: Wed, 16 Sep 2020 17:09:11 +0000 (UTC) Subject: [SciPy-Dev] feature proposal for scipy.optimize.differential_evolution In-Reply-To: <2020944517.3031414.1600271214579@mail.yahoo.com> References: <2020944517.3031414.1600271214579.ref@mail.yahoo.com> <2020944517.3031414.1600271214579@mail.yahoo.com> Message-ID: <264588388.3087478.1600276151150@mail.yahoo.com> Hello, In using scipy.optimize.differential_evolution, we found that it is useful to modify the behavior for problems where solutions of the equations being optimized are not feasible in some of the space being explored. Rather than having the code throw exceptions and then abort, we use the approach of returning np.inf for the objective function in infeasible regions. We are interested in turning this approach into a feature of scipy.optimize.differential_evolution and submitting a pull request, and would like to know if people think this would be a useful contribution. Our approach would be to add a boolean parameter to scipy.optimize.differential_evolution for selecting whether to handle lack of ability to find a solution in the above manner. The default for the parameter would be to not do this, i.e. run scipy.optimize.differential_evolution as it now works. 
Please let us know what you think, and if you would be interested in reviewing code, please let us know that as well! Thank you, Frank Torres -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhaberla at calpoly.edu Wed Sep 16 13:15:26 2020 From: mhaberla at calpoly.edu (Matt Haberland) Date: Wed, 16 Sep 2020 10:15:26 -0700 Subject: [SciPy-Dev] feature proposal for scipy.optimize.differential_evolution In-Reply-To: <264588388.3087478.1600276151150@mail.yahoo.com> References: <2020944517.3031414.1600271214579.ref@mail.yahoo.com> <2020944517.3031414.1600271214579@mail.yahoo.com> <264588388.3087478.1600276151150@mail.yahoo.com> Message-ID: Sounds useful to me to have that option. Do you know of other software with a similar option? On Wed, Sep 16, 2020 at 10:09 AM frank.home at sbcglobal.net < frank.home at sbcglobal.net> wrote: > Hello, > > In using scipy.optimize.differential_evolution, we found that it is useful > to modify the behavior for problems where solutions of the equations being > optimized are not feasible in some of the space being explored. Rather than > having the code throw exceptions and then abort, we use the approach of > returning np.inf for the objective function in infeasible regions. We are > interested in turning this approach into a feature > of scipy.optimize.differential_evolution and submitting a pull request, and > would like to know if people think this would be a useful contribution. > > Our approach would be to add a boolean parameter > to scipy.optimize.differential_evolution for selecting whether to handle > lack of ability to find a solution in the above manner. The default for the > parameter would be to not do this, i.e. > run scipy.optimize.differential_evolution as it now works. > > Please let us know what you think, and if you would be interested in > reviewing code, please let us know that as well! 
> > Thank you, > Frank Torres > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- Matt Haberland Assistant Professor BioResource and Agricultural Engineering 08A-3K, Cal Poly -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Wed Sep 16 17:58:00 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 17 Sep 2020 07:58:00 +1000 Subject: [SciPy-Dev] feature proposal for scipy.optimize.differential_evolution In-Reply-To: <264588388.3087478.1600276151150@mail.yahoo.com> References: <2020944517.3031414.1600271214579.ref@mail.yahoo.com> <2020944517.3031414.1600271214579@mail.yahoo.com> <264588388.3087478.1600276151150@mail.yahoo.com> Message-ID: On Thu, 17 Sep 2020 at 03:09, frank.home at sbcglobal.net < frank.home at sbcglobal.net> wrote: > In using scipy.optimize.differential_evolution, we found that it is useful > to modify the behavior for problems where solutions of the equations being > optimized are not feasible in some of the space being explored. Rather than > having the code throw exceptions and then abort, we use the approach of > returning np.inf for the objective function in infeasible regions. We are > interested in turning this approach into a feature > of scipy.optimize.differential_evolution and submitting a pull request, and > would like to know if people think this would be a useful contribution. > > Our approach would be to add a boolean parameter > to scipy.optimize.differential_evolution for selecting whether to handle > lack of ability to find a solution in the above manner. The default for the > parameter would be to not do this, i.e. > run scipy.optimize.differential_evolution as it now works. 
> > As the code currently exists the user's objective `func` is allowed to return `np.inf` if it finds itself within a region that is infeasible, differential_evolution doesn't throw exceptions or abort if `func` does that. One can also mark regions as infeasible with constraint functions. So it's not clear to me what feature is missing. Do you have a demonstration? -------------- next part -------------- An HTML attachment was scrubbed... URL: From serge.guelton at telecom-bretagne.eu Wed Sep 23 13:03:44 2020 From: serge.guelton at telecom-bretagne.eu (Serge Guelton) Date: Wed, 23 Sep 2020 19:03:44 +0200 Subject: [SciPy-Dev] Pythran 0.9.8 - memes tra Message-ID: <20200923170344.GA1512235@sguelton.remote.csb> Hi Folks, New Pythran release, tagged on github, available on PyPI and conda, and described here: http://serge-sans-paille.github.io/pythran-stories/pythran-097-memes-tra.html Enjoy! From frank.home at sbcglobal.net Fri Sep 25 21:52:31 2020 From: frank.home at sbcglobal.net (frank.home at sbcglobal.net) Date: Sat, 26 Sep 2020 01:52:31 +0000 (UTC) Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 203, Issue 6 In-Reply-To: References: Message-ID: <1539247507.992709.1601085151233@mail.yahoo.com> Hello, My colleague and I discussed this and now agree that this should be handled in the function being optimized, not in the scipy differential_evolution code. We do feel, however, that it would be good to add documentation for handling such cases, especially when the optimum winds up being at a boundary where np.inf occurs. We would be happy to submit draft documentation to add to https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html if that is of interest. Let me explain some of the details of issues we ran across. Interestingly, in a side conversation we learned that a similar thing came up in issue #7799/7819. In our case, the function is actually a solution of a set of coupled boundary value ode's. 
For some values of the boundary condition, there are no feasible solutions, but it would be difficult in many cases to explicitly map out boundaries with all infeasible regions and add those as bounds. In our initial attempts to use the differential evolution solver, the scipy code failed when our solver of the ode's gave warnings due to infeasibilities. We did learn to turn those warnings into errors and catch them with try-except blocks, and then return np.inf if the exceptions happened. Even that might be useful for some people to see described in documentation :) This is a case of infeasible solutions. But another problem we encountered is that sometimes the optimum then winds up being very near or actually at the boundary with an infeasible region. In such a case, the differential evolution code does not converge or behave well because each time it picks a new population, some points are in the infeasible region (since the "answer" is at the border), and therefore there are np.inf values that continue to get returned and thereby interfere with the concept of shrinking the population towards the answer. We handled this by using a callback function that gave early termination if the "best value" is not changing. I feel that would also be useful to document. Initially we were proposing to add to the scipy code itself to recognize this is happening, and do something about it internally in scipy. But we now agree that having this different convergence criterion is better handled with a callback function, as it is specific to the problem being encountered and is consistent with the availability of the callback function. A similar problem also occurred in issue #7799/7819 for basinhopping optimization. In that case, there was not a boundary where solutions for the function became infeasible, but rather there was an explicit constraint that wound up causing the solution to be exactly at the constraint boundary. 
The effect was the same as above because the solution was no longer a local minimum. Returning np.inf when appropriate and adding a callback function that provides early termination when the objective function is not changing works for that case as well. Would it be of interest to have this added to the documentation? Best regards, Frank On Wednesday, September 16, 2020, 02:59:09 PM PDT, scipy-dev-request at python.org wrote: Today's Topics: 1. feature proposal for scipy.optimize.differential_evolution (frank.home at sbcglobal.net) 2. Re: feature proposal for scipy.optimize.differential_evolution (Matt Haberland) 3. Re: feature proposal for scipy.optimize.differential_evolution (Andrew Nelson) -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Fri Sep 25 22:45:26 2020 From: andyfaff at gmail.com (Andrew Nelson) Date: Sat, 26 Sep 2020 12:45:26 +1000 Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 203, Issue 6 In-Reply-To: <1539247507.992709.1601085151233@mail.yahoo.com> References: <1539247507.992709.1601085151233@mail.yahoo.com> Message-ID: The documentation does say that "If callback returns *True*, then the minimization is halted (any polishing is still carried out)", so the documentation already indicates that the user can control when the minimisation halts. However, it's not mentioned in the documentation that the objective function is allowed to return `np.inf` if the solution is impossible. 
When constraints are employed the solver will attempt to reduce the magnitude of the constraint violation. It's worth reading the Lampinen paper (Lampinen, J., A constraint handling approach for the differential evolution algorithm. Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No. 02TH8600). Vol. 2. IEEE, 2002.) to see how that's applied. > We did learn to turn those warnings into errors and catch them with try-except blocks, and then return np.inf if the exceptions happened. Even that might be useful for some people to see described in documentation :) Hmm, that's a reasonably simple programming approach to take, and I don't think it would aid the documentation because its clarity would be degraded. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blairuk at gmail.com Sun Sep 27 13:26:02 2020 From: blairuk at gmail.com (Blair Azzopardi) Date: Sun, 27 Sep 2020 18:26:02 +0100 Subject: [SciPy-Dev] How to compile Scipy without optimisations -O0? Message-ID: Hi, I'm trying to debug some scipy C code between two platforms and I'm struggling with each platform optimizing away different variables while stepping through. I'd like to add the -O0 flag and rebuild, but when I do something along the lines of: OPT=-O0 python setup.py build_ext --inplace I see the flag added but it's superseded by later flags (default optimization -O2), i.e. .. extra options: '-std=c++14' x86_64-linux-gnu-g++: /tmp/tmpnf9pc9rw/main.c C compiler: x86_64-linux-gnu-g++ -pthread -Wno-unused-result -Wsign-compare -O0 -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC .. I've also tried creating a site.cfg with the following: [ALL] extra_compile_args = -O0 where I see the build log picking this option up but it appears to have no effect on the compiler flags. 
FOUND:
  libraries = ['openblas', 'openblas']
  library_dirs = ['/usr/lib/x86_64-linux-gnu']
  language = c
  define_macros = [('HAVE_CBLAS', None)]
  extra_compile_args = ['-O0']

Finally, I also tried building via runtests.py, but this appears to have
the same effect as setting the OPT environment variable above (e.g. python
runtests.py -g -b).

Short of sifting through the numpy.distutils package and hacking the flags
in, I was wondering if there's a recommended way to do this in SciPy? I'm
sure I'm just being daft.

Kind regards,
Blair
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From frank.home at sbcglobal.net Sun Sep 27 15:47:31 2020
From: frank.home at sbcglobal.net (frank.home at sbcglobal.net)
Date: Sun, 27 Sep 2020 19:47:31 +0000 (UTC)
Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 203, Issue 9
In-Reply-To:
References:
Message-ID: <128711136.1357014.1601236051852@mail.yahoo.com>

Fair enough, we don't have to present an example that would then be a
little bloated with details of catching infeasibilities. But it seems like
a concise statement in the documentation saying that the objective function
is allowed to return np.inf, maybe a dozen words or so, might help some
people. Otherwise, one only knows by reading the source code.

Thank you for the reference, I am enjoying reading it :) I am also
realizing that I overloaded the meaning of "infeasible" in my use of the
term, which means I should come up with another term.

For what it is worth, in the problems I am currently working on, the
objective function is actually the solution of a set of coupled
differential equations, and the variables over which the optimization takes
place are parameters in the differential equations and other parameters in
the boundary conditions. In an inner loop, the system of differential
equations is solved, and then a figure of merit is reported back to the
differential evolution algorithm.
For some sets of the parameters, there is no solution to the set of
equations. So "infeasibility" in the sense I am using it means that the
objective function does not exist for the given set of parameters, not that
the objective function can be evaluated but constraints are violated. Sorry
for overloading this term.

One could argue that the "infeasible" space, in the sense I am using the
term, should be mapped out before an optimization and then encoded as
bounds on the independent variables. But for sufficiently complicated
problems, that is difficult to do. That is why it was great to figure out
how to do it using the powerful scipy differential evolution optimizer.

Best regards,
Frank Torres
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Sun Sep 27 16:48:46 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 27 Sep 2020 21:48:46 +0100
Subject: [SciPy-Dev] How to compile Scipy without optimisations -O0?
In-Reply-To:
References:
Message-ID:

On Sun, Sep 27, 2020 at 6:26 PM Blair Azzopardi wrote:

> Hi
>
> I'm trying to debug some scipy C code between two platforms and I'm
> struggling with each platform optimizing away different variables while
> stepping through.
>
> I'd like to add -O0 flag and rebuild but when I do something along the
> lines of:
>
> OPT=-O0 python setup.py build_ext --inplace
>
> I see the flag added but it's superseded by later flags (default
> optimization -O2), ie.
>
> ..
> extra options: '-std=c++14'
> x86_64-linux-gnu-g++: /tmp/tmpnf9pc9rw/main.c
> C compiler: x86_64-linux-gnu-g++ -pthread -Wno-unused-result
> -Wsign-compare -O0 -g -fstack-protector-strong -Wformat
> -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat
> -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
> ..
>
> I've also tried creating a site.cfg with the following:
>
> [ALL]
> extra_compile_args = -O0
>
> where I see the buildlog picking this option up but it appears to have no
> effect on the compiler flags.
>
> FOUND:
>   libraries = ['openblas', 'openblas']
>   library_dirs = ['/usr/lib/x86_64-linux-gnu']
>   language = c
>   define_macros = [('HAVE_CBLAS', None)]
>   extra_compile_args = ['-O0']
>
> Finally, I also tried building via runtests.py but this appears to have
> the same effect as setting the OPT environment variable above (eg python
> runtests.py -g -b).
>
> Short of sifting through the numpy distutils package and hacking the flags
> in, I was wondering if there's a recommended way to do this in Scipy? I'm
> sure I'm just being daft.

No you got it, this is just badly designed - editing the flags in
numpy.distutils is the best we have unfortunately.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From blairuk at gmail.com Mon Sep 28 18:15:15 2020
From: blairuk at gmail.com (Blair Azzopardi)
Date: Mon, 28 Sep 2020 23:15:15 +0100
Subject: [SciPy-Dev] How to compile Scipy without optimisations -O0?
In-Reply-To:
References:
Message-ID:

On Sun, 27 Sep 2020 at 21:49, Ralf Gommers wrote:
>
> On Sun, Sep 27, 2020 at 6:26 PM Blair Azzopardi wrote:
>
>> Short of sifting through the numpy distutils package and hacking the
>> flags in, I was wondering if there's a recommended way to do this in Scipy?
>> I'm sure I'm just being daft.
>
> No you got it, this is just badly designed - editing the flags in
> numpy.distutils is the best we have unfortunately.

Thanks. Eventually I resorted to creating a wrapper around gcc, i.e.

>> ~/bin/cc_no_opt
#!/bin/bash
declare -a new_args
for i in "${@:1}"
do new_args+=("$i")
done
new_args+=("-g" "-O0")
gcc ${new_args[@]}

Then calling:

CC=~/bin/cc_no_opt python setup.py build_ext --inplace
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matti.picus at gmail.com Tue Sep 29 02:43:37 2020
From: matti.picus at gmail.com (Matti Picus)
Date: Tue, 29 Sep 2020 09:43:37 +0300
Subject: [SciPy-Dev] How to compile Scipy without optimisations -O0?
In-Reply-To:
References:
Message-ID: <85096279-c6d1-747c-b1a3-ce11194fcc1f@gmail.com>

On 9/29/20 1:15 AM, Blair Azzopardi wrote:
> On Sun, 27 Sep 2020 at 21:49, Ralf Gommers wrote:
>>
>> On Sun, Sep 27, 2020 at 6:26 PM Blair Azzopardi wrote:
>>>
>>> Short of sifting through the numpy distutils package and
>>> hacking the flags in, I was wondering if there's a recommended
>>> way to do this in Scipy? I'm sure I'm just being daft.
>>
>> No you got it, this is just badly designed - editing the flags in
>> numpy.distutils is the best we have unfortunately.
>
> Thanks.
>
> Eventually I resorted to creating a wrapper around gcc, ie
>
> >> ~/bin/cc_no_opt
> #!/bin/bash
> declare -a new_args
> for i in "${@:1}"
> do new_args+=("$i")
> done
> new_args+=("-g" "-O0")
> gcc ${new_args[@]}
>
> Then calling:
>
> CC=~/bin/cc_no_opt python setup.py build_ext --inplace

In NumPy, I can do (on Ubuntu) ``CFLAGS='-O0 -g' gdb --args runtests.py
...`` to get a debug build. That doesn't work on SciPy?

Matti

From ralf.gommers at gmail.com Tue Sep 29 06:45:27 2020
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 29 Sep 2020 11:45:27 +0100
Subject: [SciPy-Dev] How to compile Scipy without optimisations -O0?
In-Reply-To: <85096279-c6d1-747c-b1a3-ce11194fcc1f@gmail.com>
References: <85096279-c6d1-747c-b1a3-ce11194fcc1f@gmail.com>
Message-ID:

On Tue, Sep 29, 2020 at 7:43 AM Matti Picus wrote:
>
> On 9/29/20 1:15 AM, Blair Azzopardi wrote:
> > On Sun, 27 Sep 2020 at 21:49, Ralf Gommers wrote:
> >>
> >> On Sun, Sep 27, 2020 at 6:26 PM Blair Azzopardi wrote:
> >>>
> >>> Short of sifting through the numpy distutils package and
> >>> hacking the flags in, I was wondering if there's a recommended
> >>> way to do this in Scipy? I'm sure I'm just being daft.
> >>
> >> No you got it, this is just badly designed - editing the flags in
> >> numpy.distutils is the best we have unfortunately.
> >
> > Thanks.
> >
> > Eventually I resorted to creating a wrapper around gcc, ie
> >
> > >> ~/bin/cc_no_opt
> > #!/bin/bash
> > declare -a new_args
> > for i in "${@:1}"
> > do new_args+=("$i")
> > done
> > new_args+=("-g" "-O0")
> > gcc ${new_args[@]}
> >
> > Then calling:
> >
> > CC=~/bin/cc_no_opt python setup.py build_ext --inplace
>
> In NumPy, I can do (on Ubuntu) ``CFLAGS='-O0 -g' gdb --args runtests.py
> ...`` to get a debug build. That doesn't work on SciPy?

This may depend on the compiler; we have hardcoded -O3 flags in some
compiler classes, so your line will give `-O0 -O3` or vice versa (not
sure). IIRC gcc takes the last flag when there's conflicting flags, but I'm
not sure this will work the same between compilers and platforms.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stephen_malcolm at hotmail.com Wed Sep 30 13:55:26 2020
From: stephen_malcolm at hotmail.com (Stephen Malcolm)
Date: Wed, 30 Sep 2020 17:55:26 +0000
Subject: [SciPy-Dev] Issues Inserting Graphical Overlay Using Matplotlib Patches
Message-ID:

Hello All,

I'm having some trouble adding a graphical overlay, i.e. an ellipse, onto
my plot. I wish to do this, as I need to explain/portray the mean, standard
deviation and outliers, and hence evaluate the suitability of the dataset.

Could you please let me know what code I'm missing/need to add, in order to
insert this ellipse? I have no trouble plotting the data points and the
mean using this code; however, the ellipse (width and height/standard
deviation) doesn't appear. I have no errors; instead, I'm getting a
separate graph (without data points or ellipse) below the plotted one.
Please find my code below:

#pandas used to read dataset and return the data
#numpy and matplotlib to represent and visualize the data
#sklearn to implement kmeans algorithm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

#import the data
data = pd.read_csv('banknotes.csv')

#extract values
x=data['V1']
y=data['V2']

#print range to determine normalization
print ("X_max : ",x.max())
print ("X_min : ",x.min())
print ("Y_max : ",y.max())
print ("Y_min : ",y.min())

#normalize values
mean_x=x.mean()
mean_y=y.mean()
max_x=x.max()
max_y=y.max()
min_x=x.min()
min_y=y.min()
for i in range(0,x.size):
    x[i] = (x[i] - mean_x) / (max_x - min_x)
for i in range(0,y.size):
    y[i] = (y[i] - mean_y) / (max_y - min_y)

#statistical analysis using mean and standard deviation
import matplotlib.patches as patches
mean = np.mean(data, 0)
std_dev = np.std(data, 0)
ellipse = patches.Ellipse([mean[0], mean[1]], std_dev[0]*2, std_dev[1]*2, alpha=0.25)
plt.xlabel('V1')
plt.ylabel('V2')
plt.title('Visualization of raw data')
plt.scatter(data.iloc[:, 0], data.iloc[:, 1])
plt.scatter(mean[0],mean[1])
plt.figure(figsize=(6, 6))
fig,graph = plt.subplots()
graph.add_patch(ellipse)

Kind Regards,
Stephen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
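[Editor's note: a likely cause, sketched below with synthetic stand-in data since banknotes.csv is not available here. In the script above, plt.figure() and plt.subplots() are called after the scatter calls, so they open fresh, empty figures and the ellipse is added to a blank plot. Creating one figure/axes first and drawing everything onto that same axes behaves as intended:]

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Synthetic stand-in for the two CSV columns V1 and V2.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
mean = data.mean(axis=0)
std_dev = data.std(axis=0)

fig, ax = plt.subplots(figsize=(6, 6))  # create the figure/axes *first*
ax.scatter(data[:, 0], data[:, 1])             # data points
ax.scatter(mean[0], mean[1], color="red")      # mean
ellipse = patches.Ellipse((mean[0], mean[1]),
                          std_dev[0] * 2, std_dev[1] * 2, alpha=0.25)
ax.add_patch(ellipse)  # same axes as the scatter calls, so it overlays them
ax.set_xlabel("V1")
ax.set_ylabel("V2")
ax.set_title("Visualization of raw data")
fig.savefig("overlay.png")
print(len(ax.patches))  # prints 1: the ellipse is on the plotted axes
```

The key difference from the original script is only the ordering: one figure is created up front and every artist, including the ellipse, is attached to that single axes.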