[SciPy-Dev] Global Optimization Benchmarks

Ralf Gommers ralf.gommers at gmail.com
Tue May 25 05:10:36 EDT 2021


On Mon, May 24, 2021 at 9:36 AM Andrea Gavana <andrea.gavana at gmail.com>
wrote:

> Hi Daniel,
>
> On Mon, 24 May 2021 at 09.23, Daniel Schmitz <
> danielschmitzsiegen at googlemail.com> wrote:
>
>> Hey Ralf,
>>
>> discussion for DIRECT was opened here:
>> https://github.com/scipy/scipy/issues/14121 and the scipydirect
>> maintainer was pinged: https://github.com/andim/scipydirect/issues/9 .
>>
>
Thanks! I commented on the issue about having checked the license info and
what I think needs to be done.


>> About the general discussion regarding global optimization in SciPy: I am
>> working on a SciPy style API for NLopt in my free time (mostly missing
>> documentation at the moment) and would like to benchmark its MLSL and StoGO
>> algorithms using the SciPy benchmarks. Currently, this seems very
>> complicated as any new algorithm would have to be included into SciPy
>> first. Is there a way to circumvent this? Of course, Andrea's benchmark
>> suite would be the mother of all benchmarks if available ;).
>>
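>> For illustration, a minimal sketch of the kind of wrapper I have in mind
>> (hedged against the standard `nlopt` Python bindings; the wrapper name
>> and defaults here are hypothetical):
>>
>>     import numpy as np
>>     import nlopt
>>     from scipy.optimize import OptimizeResult
>>
>>     def mlsl_minimize(func, bounds, x0, maxeval=10000):
>>         # Hypothetical SciPy-style front end for NLopt's gradient-free MLSL.
>>         lb, ub = np.asarray(bounds, dtype=float).T
>>         n = len(lb)
>>         opt = nlopt.opt(nlopt.GN_MLSL_LDS, n)
>>         opt.set_local_optimizer(nlopt.opt(nlopt.LN_NELDERMEAD, n))
>>         # NLopt objectives take (x, grad); grad stays unused here
>>         opt.set_min_objective(lambda x, grad: float(func(x)))
>>         opt.set_lower_bounds(lb)
>>         opt.set_upper_bounds(ub)
>>         opt.set_maxeval(maxeval)
>>         x = opt.optimize(np.asarray(x0, dtype=float))
>>         return OptimizeResult(x=x, fun=opt.last_optimum_value(),
>>                               success=opt.last_optimize_result() > 0)
>>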
>
> I think it sounds like a very good idea to design a SciPy-style API for
> NLopt.
>

This seems valuable for LGPL'd algorithms that we cannot include directly.
Having an optional dependency also has maintenance and CI costs, so for
BSD/MIT-licensed algorithms I'd say we prefer to vendor them rather than
have a scikit-umfpack-like optional dependency. Unless there's really a ton
of code - but if it's only one or two algorithms, vendoring seems like the
way to go.
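
(For context, an optional dependency means import guards like the
following hypothetical sketch scattered through the codebase, plus CI
jobs that exercise both paths:)

    # hypothetical sketch of the usual optional-dependency guard
    try:
        import scikits.umfpack as umfpack  # optional, may be absent
        HAS_UMFPACK = True
    except ImportError:
        HAS_UMFPACK = False

    def solve_sparse(A, b):
        if HAS_UMFPACK:
            return umfpack.spsolve(A, b)
        raise ImportError("scikit-umfpack is required for this solver")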

Cheers,
Ralf



> In my personal experience the two algorithms you mention (MLSL and StoGO)
> are generally less performant than the other ones available in
> NLopt.
>
> That said, assuming there is enough interest and some spare time, I could
> easily rerun the whole benchmarks with those two methods as well.
>
> Andrea.
>
>
>
>
>> Cheers,
>> Daniel
>>
>> On Sun, 25 Apr 2021 at 22:32, Ralf Gommers <ralf.gommers at gmail.com>
>> wrote:
>>
>>> Hi Daniel,
>>>
>>> Sorry for the extremely delayed reply.
>>>
>>>
>>> On Thu, Mar 11, 2021 at 10:19 AM Daniel Schmitz <
>>> danielschmitzsiegen at googlemail.com> wrote:
>>>
>>>> Hey all,
>>>>
>>>> after Andrea's suggestions, I did an extensive GitHub search and found
>>>> several global optimization Python libraries which mimic the SciPy
>>>> API, so that a user only has to change the import statements. Could it
>>>> be useful to add a page in the documentation listing these?
>>>>
>>>
>>> This does sound useful. Probably not a whole page, more like a note in
>>> the `minimize` docstring with references, I'd think.
>>>
>>>
>>>> Non-exhaustive list:
>>>>
>>>> DIRECT: https://github.com/andim/scipydirect
>>>> DE/PSO/CMA-ES: https://github.com/keurfonluu/stochopy
>>>> PSO: https://github.com/jerrytheo/psopy
>>>> Powell's derivative free optimizers: https://www.pdfo.net/index.html
>>>>
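>>>> To illustrate the "only change the import statements" idea, a hedged
>>>> sketch (assuming scipydirect's SciPy-like `minimize` signature):
>>>>
>>>>     # instead of: from scipy.optimize import differential_evolution
>>>>     from scipydirect import minimize  # SciPy-like API (assumed)
>>>>
>>>>     def styblinski_tang(x):
>>>>         return 0.5 * sum(xi**4 - 16 * xi**2 + 5 * xi for xi in x)
>>>>
>>>>     res = minimize(styblinski_tang, bounds=[(-5, 5)] * 2)
>>>>     print(res.x, res.fun)
>>>>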
>>>> As DIRECT was very competitive on some of Andrea's benchmarks, it
>>>> could be useful to mimic the scipydirect repo for inclusion into scipy
>>>> (MIT license). The code is unfortunately an f2py port of the original
>>>> Fortran implementation, which has hard-coded limits on the number of
>>>> function evaluations (90,000) and iterations (6,000). Any opinions on
>>>> this?
>>>>
>>>
>>> This sounds like a good idea. Would you mind opening a GitHub issue with
>>> the feature request, so we keep track of this? Contacting the original
>>> author about this would also be useful; if the author would like to
>>> upstream their code, that'd be a good outcome.
>>>
>>> Re the hard-coded limits, I assume those can be removed without too
>>> much trouble.
>>>
>>>
>>>>
>>>> I personally am very impressed by biteopt's performance, and although
>>>> it ranked very high in other global optimization benchmarks there is
>>>> no formal paper on it yet. I understand the scipy guidelines to mean
>>>> that such a paper is a prerequisite for inclusion into scipy.
>>>>
>>>
>>> Well, we do have the very extensive benchmarks - if it does indeed
>>> perform better than what we have, then of course we'd be happy to add a new
>>> algorithm even if it doesn't have a paper. We use papers and their
>>> citation counts only as an indication of usefulness; anything that
>>> outperforms our existing code is clearly useful.
>>>
>>> Cheers,
>>> Ralf
>>>
>>>
>>>
>>>>
>>>> Best,
>>>>
>>>> Daniel
>>>>
>>>>
>>>> On Sun, 17 Jan 2021 at 14:33, Andrea Gavana <andrea.gavana at gmail.com>
>>>> wrote:
>>>> >
>>>> > Hi Stefan,
>>>> >
>>>> >     You’re most welcome :-) . I’m happy the experts in the community
>>>> are commenting and suggesting things, and constructive criticism is also
>>>> always welcome.
>>>> >
>>>> > On Sun, 17 Jan 2021 at 12.11, Stefan Endres <
>>>> stefan.c.endres at gmail.com> wrote:
>>>> >>
>>>> >> Dear Andrea,
>>>> >>
>>>> >> Thank you very much for this detailed analysis. I don't think I've
>>>> seen such a large collection of benchmark test suites or collection of DFO
>>>> algorithms since the publication by Rios and Sahinidis in 2013. Some
>>>> questions:
>>>> >>
>>>> >> Many of the commercial algorithms offer free licenses for
>>>> benchmarking problems of less than 10 dimensions. Would you be willing to
>>>> include some of these in your benchmarks at some point? It would be a great
>>>> reference to use.
>>>> >
>>>> >
>>>> > I’m definitely willing to include those commercial algorithms. The
>>>> test suite per se is almost completely automated, so it’s not that
>>>> complicated to add one or more solvers. I'm generally more inclined to
>>>> test open-source algorithms, but there's nothing stopping the inclusion
>>>> of commercial ones.
>>>> >
>>>> > I welcome any suggestions related to commercial solvers, as long as
>>>> they can run on Python 2 / Python 3 and on Windows (I might be able to
>>>> set up a Linux virtual machine if absolutely needed, but that would defeat part
>>>> of the purpose of the exercise - SHGO, Dual Annealing and the other SciPy
>>>> solvers run on all platforms that support SciPy).
>>>> >
>>>> >> The collection of test suites you've garnered could be immensely
>>>> useful for further algorithm development. Is there a possibility of
>>>> releasing the code publicly (presumably after you've published the results
>>>> in a journal)?
>>>> >>
>>>> >> In this case I would also like to volunteer to run some of the
>>>> commercial solvers on the benchmark suite.
>>>> >> It would also help to have a central repository for fixing bugs and
>>>> adding lower global minima when they are found (of which there are quite
>>>> a few).
>>>> >
>>>> >
>>>> >
>>>> > I’m still sorting out all the implications related to a potential
>>>> paper with my employer, but as far as I can see there shouldn’t be any
>>>> problem with that: assuming everything goes as it should, I will definitely
>>>> push for making the code open source.
>>>> >
>>>> >
>>>> >>
>>>> >> Comments on shgo:
>>>> >>
>>>> >> High RAM use in higher dimensions:
>>>> >>
>>>> >> In the higher dimensions the new simplicial sampling can be used
>>>> (not pushed to scipy yet; I still need to update some documentation before
>>>> the PR). This alleviates, but does not eliminate, the memory issue. As
>>>> you've said, SHGO is best suited to problems below 10 dimensions; anything
>>>> higher leaves the realm of DFO problems and starts to enter the domain of
>>>> NLP problems. My personal preference in this case is to use the stochastic
>>>> algorithms (basinhopping and differential evolution) on problems where it
>>>> is known that a gradient-based solver won't work.
>>>> >>
>>>> >> An exception to this "rule" is when special grey-box information
>>>> such as symmetry of the objective function (something that can be supplied
>>>> to shgo to push the applicability of the algorithm up to ~100 variables) or
>>>> pre-computed bounds on the Lipschitz constants is known.
>>>> >>
>>>> >> In the symmetry case SHGO can solve these problems by supplying the
>>>> `symmetry` option. I used this in the previous benchmarks for the JOGO
>>>> publication; although I did not specifically check whether performance
>>>> actually improved on those problems, shgo did converge on all benchmark
>>>> problems in the scipy test suite.
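>>>>
>>>> A hedged usage sketch (assuming the released shgo accepts a boolean
>>>> `symmetry` option, per its documentation):
>>>>
>>>>     from scipy.optimize import shgo
>>>>
>>>>     def f(x):  # fully symmetric in its variables
>>>>         return 0.5 * sum(xi**4 - 16 * xi**2 + 5 * xi for xi in x)
>>>>
>>>>     bounds = [(-5, 5)] * 6
>>>>     res = shgo(f, bounds, sampling_method='simplicial',
>>>>                options={'symmetry': True})
>>>>     print(res.x, res.fun)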
>>>> >>
>>>> >> I have had a few reports of memory leaks from various users. I have
>>>> spoken to a few collaborators about the possibility of finding a Master's
>>>> student to cythonize some of the code or otherwise improve it. Hopefully,
>>>> this will happen in the summer semester of 2021.
>>>> >
>>>> >
>>>> > To be honest I wouldn’t be so concerned in general: SHGO is an
>>>> excellent global optimization algorithm and it consistently ranks at the
>>>> top, no matter what problems you throw at it. Together with Dual Annealing,
>>>> SciPy has gained two phenomenal nonlinear solvers and I’m very happy to see
>>>> that SciPy is now at the cutting edge of the open source global
>>>> optimization universe.
>>>> >
>>>> > Andrea.
>>>> >
>>>> >> Thank you again for compiling this large set of benchmark results.
>>>> >>
>>>> >> Best regards,
>>>> >> Stefan
>>>> >> On Fri, Jan 8, 2021 at 10:21 AM Andrea Gavana <
>>>> andrea.gavana at gmail.com> wrote:
>>>> >>>
>>>> >>> Dear SciPy Developers & Users,
>>>> >>>
>>>> >>>     long time no see :-) . I thought I'd start 2021 with a bit of a
>>>> bang, to try and forget how bad 2020 has been... So I am happy to present
>>>> you with a revamped version of the Global Optimization Benchmarks from my
>>>> previous exercise in 2013.
>>>> >>>
>>>> >>> This new set of benchmarks pretty much supersedes - and greatly
>>>> expands - the previous analysis that you can find at this location:
>>>> http://infinity77.net/global_optimization/ .
>>>> >>>
>>>> >>> The approach I have taken this time is to select as many benchmark
>>>> test suites as possible: most of them are characterized by test function
>>>> generators, from which we can create an almost unlimited number of
>>>> unique test problems. The biggest news:
>>>> >>>
>>>> >>> This whole exercise is made up of 6,825 test problems divided
>>>> across 16 different test suites: most of these problems are of low
>>>> dimensionality (2 to 6 variables) with a few benchmarks extending to 9+
>>>> variables. With all the sensitivities performed during this exercise on
>>>> those benchmarks, the overall grand total number of function evaluations
>>>> stands at 3,859,786,025 - close to 4 billion. Not bad.
>>>> >>> A couple of "new" optimization algorithms I have ported to Python:
>>>> >>>
>>>> >>> MCS: Multilevel Coordinate Search, my translation to Python of
>>>> the original Matlab code from A. Neumaier and W. Huyer (which also gives
>>>> GLS and MINQ for free). I have added a few minor improvements compared to
>>>> the original implementation.
>>>> >>> BiteOpt: BITmask Evolution OPTimization. I have converted the C++
>>>> code into Python and added a few minor modifications.
>>>> >>>
>>>> >>>
>>>> >>> Enough chatting for now. The 13 tested algorithms are described
>>>> here:
>>>> >>>
>>>> >>> http://infinity77.net/go_2021/
>>>> >>>
>>>> >>> High level description & results of the 16 benchmarks:
>>>> >>>
>>>> >>> http://infinity77.net/go_2021/thebenchmarks.html
>>>> >>>
>>>> >>> Each benchmark test suite has its own dedicated page, with more
>>>> detailed results and sensitivities.
>>>> >>>
>>>> >>> List of tested algorithms:
>>>> >>>
>>>> >>> AMPGO: Adaptive Memory Programming for Global Optimization: this is
>>>> my Python implementation of the algorithm described here:
>>>> >>>
>>>> >>>
>>>> http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
>>>> >>>
>>>> >>> I have added a few improvements here and there based on my Master's
>>>> Thesis work on the standard Tunnelling Algorithm of Levy, Montalvo and
>>>> Gomez. After AMPGO was integrated into lmfit, I improved it even more -
>>>> in my opinion.
>>>> >>>
>>>> >>> BasinHopping: Basin hopping is a stochastic algorithm that attempts to
>>>> find the global minimum of a smooth scalar function of one or more
>>>> variables. The algorithm was originally described by David Wales:
>>>> >>>
>>>> >>> http://www-wales.ch.cam.ac.uk/
>>>> >>>
>>>> >>> BasinHopping is now part of the standard SciPy distribution.
>>>> >>>
>>>> >>> BiteOpt: BITmask Evolution OPTimization, based on the algorithm
>>>> presented in this GitHub link:
>>>> >>>
>>>> >>> https://github.com/avaneev/biteopt
>>>> >>>
>>>> >>> I have converted the C++ code into Python and added a few minor
>>>> modifications.
>>>> >>>
>>>> >>> CMA-ES: Covariance Matrix Adaptation Evolution Strategy, based on
>>>> the following algorithm:
>>>> >>>
>>>> >>> http://www.lri.fr/~hansen/cmaesintro.html
>>>> >>>
>>>> >>> http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code
>>>> for the algorithm)
>>>> >>>
>>>> >>> CRS2: Controlled Random Search with Local Mutation, as implemented
>>>> in the NLopt package:
>>>> >>>
>>>> >>>
>>>> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation
>>>> >>>
>>>> >>> DE: Differential Evolution, described in the following page:
>>>> >>>
>>>> >>> http://www1.icsi.berkeley.edu/~storn/code.html
>>>> >>>
>>>> >>> DE is now part of the standard SciPy distribution, and I have taken
>>>> the implementation as it stands in SciPy:
>>>> >>>
>>>> >>>
>>>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution
>>>> >>>
>>>> >>> DIRECT: the DIviding RECTangles procedure, described in:
>>>> >>>
>>>> >>>
>>>> https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf
>>>> >>>
>>>> >>>
>>>> http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L
>>>> (Python code for the algorithm)
>>>> >>>
>>>> >>> DualAnnealing: the Dual Annealing algorithm, taken directly from
>>>> the SciPy implementation:
>>>> >>>
>>>> >>>
>>>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing
>>>> >>>
>>>> >>> LeapFrog: the Leap Frog procedure, which was recommended to me,
>>>> taken from:
>>>> >>>
>>>> >>> https://github.com/flythereddflagg/lpfgopt
>>>> >>>
>>>> >>> MCS: Multilevel Coordinate Search, my translation to Python of
>>>> the original Matlab code from A. Neumaier and W. Huyer (which also gives
>>>> GLS and MINQ for free):
>>>> >>>
>>>> >>> https://www.mat.univie.ac.at/~neum/software/mcs/
>>>> >>>
>>>> >>> I have added a few minor improvements compared to the original
>>>> implementation. See the MCS section for a quick and dirty comparison
>>>> between the Matlab code and my Python conversion.
>>>> >>>
>>>> >>> PSWARM: Particle Swarm Optimization algorithm; it has been
>>>> described in many online papers. I have used a compiled version of the C
>>>> source code from:
>>>> >>>
>>>> >>> http://www.norg.uminho.pt/aivaz/pswarm/
>>>> >>>
>>>> >>> SCE: Shuffled Complex Evolution, described in:
>>>> >>>
>>>> >>> Duan, Q., S. Sorooshian, and V. Gupta, Effective and efficient
>>>> global optimization for conceptual rainfall-runoff models, Water Resour.
>>>> Res., 28, 1015-1031, 1992.
>>>> >>>
>>>> >>> The version I used was graciously made available by Matthias Cuntz
>>>> via a personal e-mail.
>>>> >>>
>>>> >>> SHGO: Simplicial Homology Global Optimization, taken directly from
>>>> the SciPy implementation:
>>>> >>>
>>>> >>>
>>>> https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo
>>>> >>>
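>>>>
>>>> For the four solvers that ship with SciPy, the benchmark driver boils
>>>> down to calls like the following hedged sketch (the test function is
>>>> just a stand-in):
>>>>
>>>>     import numpy as np
>>>>     from scipy.optimize import (basinhopping, differential_evolution,
>>>>                                 dual_annealing, shgo)
>>>>
>>>>     def rastrigin(x):  # stand-in test function
>>>>         x = np.asarray(x)
>>>>         return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
>>>>
>>>>     bounds = [(-5.12, 5.12)] * 4
>>>>     x0 = np.full(4, 2.0)  # basinhopping needs a starting point
>>>>
>>>>     print(differential_evolution(rastrigin, bounds).fun)
>>>>     print(dual_annealing(rastrigin, bounds).fun)
>>>>     print(shgo(rastrigin, bounds).fun)
>>>>     print(basinhopping(rastrigin, x0, niter=50).fun)
>>>>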
>>>> >>>
>>>> >>> List of benchmark test suites:
>>>> >>>
>>>> >>> SciPy Extended: 235 multivariate problems (where the number of
>>>> independent variables ranges from 2 to 17), again with multiple
>>>> local/global minima.
>>>> >>>
>>>> >>> I have added about 40 new functions to the standard SciPy
>>>> benchmarks and fixed a few bugs in the existing benchmark models in the
>>>> SciPy repository.
>>>> >>>
>>>> >>> GKLS: 1,500 test functions, with dimensionality varying from 2 to
>>>> 6, generated with the super famous GKLS Test Functions Generator. I have
>>>> taken the original C code (available at http://netlib.org/toms/) and
>>>> converted it to Python.
>>>> >>>
>>>> >>> GlobOpt: 288 tough problems, with dimensionality varying from 2 to
>>>> 5, created with another test function generator which I arbitrarily named
>>>> “GlobOpt”:
>>>> https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization
>>>> . The original code is in C++ and I have bridged it to Python using Cython.
>>>> >>>
>>>> >>> Many thanks go to Professor Marco Locatelli for providing an
>>>> updated copy of the C++ source code.
>>>> >>>
>>>> >>> MMTFG: sort-of an acronym for “Multi-Modal Test Function with
>>>> multiple Global minima”, this test suite implements the work of Jani
>>>> Ronkkonen:
>>>> https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima
>>>> . It contains 981 test problems with dimensionality varying from 2 to 4.
>>>> The original code is in C and I have bridged it to Python using Cython.
>>>> >>>
>>>> >>> GOTPY: a generator of benchmark functions using the
>>>> Bocharov-Feldbaum “Method-Min”, containing 400 test problems with
>>>> dimensionality varying from 2 to 5. I have taken the Python implementation
>>>> from https://github.com/redb0/gotpy and improved it in terms of
>>>> runtime.
>>>> >>>
>>>> >>> Original paper from
>>>> http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng
>>>> .
>>>> >>>
>>>> >>> Huygens: this benchmark suite is very different from the rest, as
>>>> it uses a “fractal” approach to generate test functions. It is based on the
>>>> work of Cara MacNish on Fractal Functions. The original code is in Java,
>>>> and at the beginning I just converted it to Python: given it was slow as a
>>>> turtle, I re-implemented it in Fortran and wrapped it using f2py, then
>>>> generated 600 2-dimensional test problems from it.
>>>> >>>
>>>> >>> LGMVG: not sure about the meaning of the acronym, but the
>>>> implementation follows the “Max-Set of Gaussians Landscape Generator”
>>>> described in http://boyuan.global-optimization.com/LGMVG/index.htm .
>>>> Source code is given in Matlab, but it’s fairly easy to convert it to
>>>> Python. This test suite contains 304 problems with dimensionality varying
>>>> from 2 to 5.
>>>> >>>
>>>> >>> NgLi: Stemming from the work of Chi-Kong Ng and Duan Li, this is a
>>>> test problem generator for unconstrained optimization, but it’s fairly easy
>>>> to assign bound constraints to it. The methodology is described in
>>>> https://www.sciencedirect.com/science/article/pii/S0305054814001774 ,
>>>> while the Matlab source code can be found in
>>>> http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab
>>>> script to generate 240 problems with dimensionality varying from 2 to 5 by
>>>> outputting the generator parameters in text files, then used Python to
>>>> create the objective functions based on those parameters and the benchmark
>>>> methodology.
>>>> >>>
>>>> >>> MPM2: Implementing the “Multiple Peaks Model 2”, there is a Python
>>>> implementation at
>>>> https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This
>>>> is a test problem generator also used in the smoof library; I have taken
>>>> the code almost as-is and generated 480 benchmark functions with
>>>> dimensionality varying from 2 to 5.
>>>> >>>
>>>> >>> RandomFields: as described in
>>>> https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition
>>>> , it generates benchmark functions by “smoothing” one or more
>>>> multidimensional discrete random fields and composing them. No source code
>>>> is given, but the implementation is fairly straightforward from the article
>>>> itself.
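>>>>
>>>> (Roughly, the construction is along these lines; a hedged sketch, not
>>>> the paper's exact recipe:)
>>>>
>>>>     import numpy as np
>>>>     from scipy.ndimage import gaussian_filter
>>>>     from scipy.interpolate import RegularGridInterpolator
>>>>
>>>>     rng = np.random.default_rng(0)
>>>>     field = rng.standard_normal((64, 64))        # discrete random field
>>>>     smooth = gaussian_filter(field, sigma=4.0)   # "smoothing" step
>>>>     axes = (np.linspace(0, 1, 64),) * 2
>>>>     f = RegularGridInterpolator(axes, smooth)    # continuous objective
>>>>
>>>>     print(f([[0.3, 0.7]]))  # evaluate the test function at a point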
>>>> >>>
>>>> >>> NIST: not exactly the realm of Global Optimization solvers, but the
>>>> NIST StRD dataset can be used to generate a single objective function as
>>>> “sum of squares”. I have used the NIST dataset as implemented in lmfit,
>>>> thus creating 27 test problems with dimensionality ranging from 2 to 9.
>>>> >>>
>>>> >>> GlobalLib: Arnold Neumaier maintains a suite of test problems
>>>> termed “COCONUT Benchmark” and Sahinidis has converted the GlobalLib and
>>>> PrincetonLib AMPL/GAMS datasets into C/Fortran code (
>>>> http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used a simple C
>>>> parser to convert the benchmarks from C to Python.
>>>> >>>
>>>> >>> The global minima are taken from Sahinidis or from Neumaier or
>>>> refined using the NEOS server when the accuracy of the reported minima is
>>>> too low. The suite contains 181 test functions with dimensionality varying
>>>> between 2 and 9.
>>>> >>>
>>>> >>> CVMG: another “landscape generator”, I had to dig it out using the
>>>> Wayback Machine at
>>>> http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html
>>>> , the acronym stands for “Continuous Valued Multimodality Generator”.
>>>> Source code is in C++ but it’s fairly easy to port it to Python. In
>>>> addition to the original implementation (that uses the Sigmoid as a
>>>> softmax/transformation function) I have added a few others to create varied
>>>> landscapes. 360 test problems have been generated, with dimensionality
>>>> ranging from 2 to 5.
>>>> >>>
>>>> >>> NLSE: again, not really the realm of Global optimization solvers,
>>>> but Nonlinear Systems of Equations can be transformed into single objective
>>>> functions to optimize. I have drawn from many different sources
>>>> (Publications, ALIAS/COPRIN and many others) to create 44 systems of
>>>> nonlinear equations with dimensionality ranging from 2 to 8.
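>>>>
>>>> (The transformation itself is just the standard least-squares trick,
>>>> e.g.:)
>>>>
>>>>     import numpy as np
>>>>
>>>>     def system(x):  # example nonlinear system g(x) = 0
>>>>         return np.array([x[0]**2 + x[1]**2 - 1.0,
>>>>                          x[0] - x[1]**3])
>>>>
>>>>     def objective(x):
>>>>         # the global minimum value is 0, attained exactly at the roots
>>>>         return np.sum(system(x)**2)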
>>>> >>>
>>>> >>> Schoen: based on the early work of Fabio Schoen and his short note
>>>> on a simple but interesting idea for a test function generator, I have taken
>>>> the C code in the note and converted it into Python, thus creating 285
>>>> benchmark functions with dimensionality ranging from 2 to 6.
>>>> >>>
>>>> >>> Many thanks go to Professor Fabio Schoen for providing an updated
>>>> copy of the source code and for the email communications.
>>>> >>>
>>>> >>> Robust: the last benchmark test suite for this exercise; it is
>>>> actually composed of 5 different kinds of analytical test function
>>>> generators, producing deceptive, multimodal, or flat functions depending on
>>>> the settings. Matlab source code is available at
>>>> http://www.alimirjalili.com/RO.html , I simply converted it to Python
>>>> and created 420 benchmark functions with dimensionality ranging from 2 to 6.
>>>> >>>
>>>> >>>
>>>> >>> Enjoy, and Happy 2021 :-) .
>>>> >>>
>>>> >>>
>>>> >>> Andrea.
>>>> >>>
>>>> >>
>>>> >>
>>>> >>
>>>> >> --
>>>> >> Stefan Endres (MEng, AMIChemE, BEng (Hons) Chemical Engineering)
>>>> >>
>>>> >> Research Associate: Leibniz Institute for Materials
>>>> Engineering IWT, Badgasteiner Straße 3, 28359 Bremen, Germany
>>>> >> Work phone (DE): +49 (0) 421 218 51238
>>>> >> Cellphone (DE): +49 (0) 160 949 86417
>>>> >> Cellphone (ZA): +27 (0) 82 972 42 89
>>>> >> E-mail (work): s.endres at iwt.uni-bremen.de
>>>> >> Website: https://stefan-endres.github.io/