From marc.barbry at mailoo.org Fri Sep 1 09:56:05 2017
From: marc.barbry at mailoo.org (marc)
Date: Fri, 1 Sep 2017 15:56:05 +0200
Subject: [SciPy-Dev] SPMV not implemented in linalg.blas?
Message-ID:

Dear SciPy developers,

Is there a particular reason why the BLAS routine SPMV does not have a wrapper in SciPy?

Marc

From ralf.gommers at gmail.com Fri Sep 1 23:31:38 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 2 Sep 2017 15:31:38 +1200
Subject: [SciPy-Dev] SPMV not implemented in linalg.blas?
In-Reply-To: References: Message-ID:

On Sat, Sep 2, 2017 at 1:56 AM, marc wrote:
> Dear SciPy developers,
>
> Is there a particular reason why the BLAS routine SPMV does not have a wrapper in SciPy?

No particular reason as far as I know, just no one has done the work. spmv is listed as one of a number of not-yet-wrapped routines in https://github.com/scipy/scipy/blob/master/scipy/linalg/fblas_l2.pyf.src#L12; PRs for any of those are welcome.

Cheers,
Ralf

From oss at multiwave.ch Sat Sep 2 00:51:26 2017
From: oss at multiwave.ch (oss)
Date: Sat, 2 Sep 2017 06:51:26 +0200
Subject: [SciPy-Dev] N dimensional Gaussian quadrature
In-Reply-To: <9426BFEE-91F1-4ACC-B57F-C3073A6B1CC7@multiwave.ch>
References: <9426BFEE-91F1-4ACC-B57F-C3073A6B1CC7@multiwave.ch>
Message-ID: <1DD590A2-BDDE-43F4-BFDB-1F5C36A9BB8D@multiwave.ch>

Hi scipy-dev,

We've implemented and tested an n-dimensional fixed_quad function, with a choice of order for each dimension separately (useful when known singular points need to be avoided). See the diff below. We're open to any requests for improvement.

https://github.com/scipy/scipy/compare/master...multiwave:n_fixed_quad

Thanks,
tryfon at multiwave
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Sat Sep 2 03:32:36 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 2 Sep 2017 19:32:36 +1200
Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal
In-Reply-To: References: Message-ID:

On Thu, Aug 31, 2017 at 5:22 PM, Warren Weckesser <warren.weckesser at gmail.com> wrote:
> I've been using quite a few of the functions in scipy.signal recently, and I've been reminded of some of the quirks of the API. I have submitted a pull request to clean up one of those quirks.
>
> In the new pull request, I have added the argument 'fs' (the sampling frequency) to the following functions: firwin, firwin2, firls, and remez. For firwin, firwin2 and firls, the new argument replaces 'nyq', and for remez, it replaces 'Hz'. This makes these functions consistent with the other functions in which the sampling frequency is specified using 'fs': periodogram, welch, csd, coherence, spectrogram, stft, and istft. 'fs', or in LaTeX, $f_s$, is common notation for the sampling frequency in the DSP literature.
>
> In the pull request, the old argument is given a "soft" deprecation. That means the docstring says the argument is deprecated, but code to actually generate a DeprecationWarning has not been added yet. I'm fine with adding that now, but some might prefer a relatively long and soft transition for these changes. (Personally, I don't mind if they hang around for a while, but they should be gone by 2.0. :)

Good idea, +1 for a soft deprecation.

Ralf

> I haven't changed the default value of the sampling frequency. For the FIR filter design functions firls, firwin and firwin2, the default is nyq=1.0 (equivalent to fs=2), while for remez the default is Hz=1 (i.e. fs=1). The functions that currently already use 'fs' all have the default fs=1. Changing the default for the FIR design functions would be a much more disruptive change.
>
> Comments here or on the pull request are welcome.
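[Editorial note: the conversion implied by the proposed change can be sketched with a hypothetical helper; this is not SciPy code, only an illustration of how 'fs' and the old 'nyq' relate.]

```python
def normalized_cutoff(cutoff, fs=None, nyq=None):
    """Hypothetical helper mirroring the proposed API change: accept
    either the new 'fs' (sampling frequency) or the old 'nyq' (Nyquist
    frequency) and return the cutoff normalized to the Nyquist
    frequency, as the FIR design routines expect internally."""
    if fs is not None and nyq is not None:
        raise ValueError("pass either fs or nyq, not both")
    if fs is not None:
        nyq = fs / 2.0   # the Nyquist frequency is half the sampling rate
    elif nyq is None:
        nyq = 1.0        # historical firwin/firwin2/firls default
    return cutoff / nyq

# A 100 Hz cutoff at fs=1000 Hz is 0.2 in Nyquist-normalized units,
# the same value the old call with nyq=500 would have produced:
print(normalized_cutoff(100.0, fs=1000.0))  # -> 0.2
```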
>
> P.S. I can see future pull requests in which 'fs' is added to functions that currently don't have an argument to specify the sampling frequency. I'm looking at you, freqz.
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From maurizio.cipollina at gmail.com Fri Sep 8 03:55:34 2017
From: maurizio.cipollina at gmail.com (Maurizio Cipollina)
Date: Fri, 8 Sep 2017 09:55:34 +0200
Subject: [SciPy-Dev] Stochastic calculus package
Message-ID:

Hi everybody, and first of all thanks for the great job you have been doing, on which I relied *A LOT* over time.

Stochastic calculus and stochastic differential equations are a significant branch of mathematics that to my understanding is underrepresented in scipy, as well as in the open source python stack at large. I may well be missing something, but as far as I can see one can find out there many recipes or collections of recipes, and a number of packages/projects skewed towards finance, but no structured framework for generating and handling stochastic processes and numerically solving SDEs. This is a pity, since scipy/numpy provide efficient implementations of all the basic ingredients needed to do the trick.

Out of professional need, and prompted by this gap, I have developed a general purpose package of which I own the copyright, and which I would be happy to release under the BSD license as part of scipy, in case there is a consensus among the scipy community that it makes sense to do so (the package has no dependencies beyond the standard library, numpy and scipy, and would probably fit best as a subpackage of scipy.stats).

Before setting up a PR, I would be happy to hear your thoughts: I have pasted below the package docstring, which should give an overview of its scope and limitations.
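[Editorial note: for readers unfamiliar with the Euler-Maruyama scheme mentioned in the package overview, here is a bare-bones, self-contained sketch for an Ornstein-Uhlenbeck SDE. This is illustrative only and does not reflect the proposed package's API.]

```python
import math
import random

def euler_maruyama_ou(x0, theta, mu, sigma, t_max, n_steps, seed=0):
    """Toy Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X)*dt + sigma*dW, returning the endpoint of one path."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment ~ N(0, dt)
        x += theta * (mu - x) * dt + sigma * dw
    return x

# With sigma=0 the scheme reduces to plain Euler for dX = theta*(mu - X)*dt,
# so the path simply decays towards mu:
print(euler_maruyama_ou(x0=0.0, theta=1.0, mu=1.0, sigma=0.0,
                        t_max=10.0, n_steps=1000))
```

A production implementation would vectorize the loop over many paths with numpy, which is exactly the design point the proposal makes.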
Please note that some of the realized processes stray into mathematical finance territory, somewhat inevitably since finance is one major field of application of stochastic calculus, but the focus is the latter, not the former, at least in my intentions.

Thanks in advance for taking your time to go through this, and for your comments and suggestions.
Maurizio

"""
===========================================================
Stochastic - Monte Carlo simulation of stochastic processes
===========================================================

This package provides:

1. A `process` class, a subclass of `numpy.ndarray` representing
   a sequence of values in time, realized in one or several paths.
   Algebraic manipulations and ufunc computations are supported for
   instances that share the same timeline and comply with numpy
   broadcasting rules. Interpolation along the timeline is supported
   via callability of `process` instances. Process-specific
   functionalities, such as averaging and indexing along time or across
   paths, are delegated to process-specific methods, attributes and
   properties (no overriding of `numpy.ndarray` operations).

2. The `source` class and its subclasses, stochasticity sources providing
   numerical realizations of the differentials commonly found
   in stochastic differential equations (SDEs), with or without
   memory of formerly invoked realizations.

3. The `integrator` class, a general framework for numerical SDE
   integration, intended for subclassing, to assist the definition of
   step-by-step integration algorithms and the computation of numerical
   realizations of stochastic processes.

4. Several objects providing realizations of specific stochastic processes
   along a given timeline and across a requested number of paths.
   Realizations are obtained via explicit formulae, in case the relevant
   SDE has a closed-form solution, or otherwise as `integrator` subclasses
   performing Euler-Maruyama numerical integration.

5.
   A `montecarlo` class, as an aid to cumulate the results of several
   Monte Carlo simulations of a given stochastic variable, and to extract
   summary estimates for its probability distribution function and
   statistics, thus supporting simulations across a number of paths that
   exceeds the maximum allowed by available memory.

6. Several analytical results relating to the supported stochastic
   processes, as a general reference and for testing purposes.

For all sources and realizations, process values can take any shape,
scalar or multidimensional. Correlated multivariate stochasticity sources
are supported. Time-varying process parameters (correlations, intensity
of Poisson processes, volatilities etc.) are allowed whenever applicable.
`process` instances act as valid stochasticity source realizations (as does
any callable object complying with a `source` protocol), and may be
passed as a source specification when computing the realization of a given
process.

Computations are fully vectorized across paths, providing an efficient
infrastructure for simulating a large number of process realizations.
Less so for a large number of time steps: integrating 1000 time steps
across 100000 paths takes seconds, one million time steps across
100 paths takes minutes.

General infrastructure
======================
.. autosummary::
   :toctree: generated/

   process     -- Array representation of a process (a subclass of
               -- numpy.ndarray).
   kfunc       -- Base class for functions with managed keyword arguments.
   source      -- Base class for stochasticity sources.
   true_source -- Base class for stochasticity sources with memory.
   integrator  -- General framework for numerical SDE integration.
   montecarlo  -- Summary statistics of Monte Carlo simulations.

Stochasticity sources
=====================
.. autosummary::
   :toctree: generated/

   wiener           -- dW, a source of standard Wiener process (Brownian
                    -- motion) increments.
   symmetric_wiener -- as above, with symmetric paths (averages exactly 0).
   true_wiener         -- dW, a source of standard Wiener process (Brownian
                       -- motion) increments, with memory.
   poisson             -- dN, a source of Poisson process increments.
   compound_poisson    -- dJ, a source of compound Poisson process increments
                       -- (jumps distributed in time as a Poisson process,
                       -- and in size as a given `scipy.stats` distribution).
   compound_poisson_rv -- Preset jump size distributions for compound Poisson
                       -- process increments.

Stochastic process realizations
===============================
.. autosummary::
   :toctree: generated/

   const_wiener_process          -- Wiener process (Brownian motion), with
                                 -- time-independent parameters.
   const_lognorm_process         -- Lognormal process, with time-independent
                                 -- parameters.
   wiener_process                -- Wiener process (Brownian motion).
   lognorm_process               -- Lognormal process.
   ornstein_uhlenbeck_process    -- Ornstein-Uhlenbeck process (mean-reverting
                                 -- Brownian motion).
   hull_white_process            -- F-factors Hull-White process (sum of F
                                 -- correlated mean-reverting Brownian
                                 -- motions).
   hull_white_1factor_process    -- 1-factor Hull-White process (F=1 Hull-White
                                 -- process with F-index collapsed to a
                                 -- scalar).
   cox_ingersoll_ross_process    -- Cox-Ingersoll-Ross mean-reverting process.
   full_heston_process           -- Heston stochastic volatility process
                                 -- (returns both process and volatility).
   heston_process                -- Heston stochastic volatility process
                                 -- (stores and returns process only).
   jump_diffusion_process        -- Jump-diffusion process (lognormal process
                                 -- with compound Poisson jumps).
   merton_jump_diffusion_process -- Merton jump-diffusion process
                                 -- (jump-diffusion process with normal
                                 -- jump size distribution).
   kou_jump_diffusion_process    -- Kou jump-diffusion process (jump-diffusion
                                 -- process with double exponential
                                 -- jump size distribution).
Shortcuts::

    lognorm -- lognorm_process
    oruh    -- ornstein_uhlenbeck_process
    hwp     -- hull_white_process
    hw1f    -- hull_white_1factor_process
    cir     -- cox_ingersoll_ross_process
    heston  -- heston_process
    mjd     -- merton_jump_diffusion_process
    kou     -- kou_jump_diffusion_process

Analytical results
==================
Exact statistics for the realized stochastic processes are listed below,
limited to the case of constant process parameters and with some further
limitations to the parameters' domains. Function arguments are consistent
with those of the corresponding processes. Suffixes `pdf`, `cdf` and `chf`
stand respectively for the probability distribution function, cumulative
probability distribution function, and characteristic function.
Black-Scholes formulae for the valuation of call and put options have been
included (prefixed with `bs` below).

.. autosummary::
   :toctree: generated/

   wiener_mean
   wiener_var
   wiener_std
   wiener_pdf
   wiener_cdf
   wiener_chf

   lognorm_mean
   lognorm_var
   lognorm_std
   lognorm_pdf
   lognorm_cdf
   lognorm_log_chf

   oruh_mean
   oruh_var
   oruh_std
   oruh_pdf
   oruh_cdf

   hw2f_mean
   hw2f_var
   hw2f_std
   hw2f_pdf
   hw2f_cdf

   cir_mean
   cir_var
   cir_std
   cir_pdf

   heston_log_mean
   heston_log_var
   heston_log_std
   heston_log_pdf
   heston_log_chf

   mjd_log_pdf
   mjd_log_chf

   kou_mean
   kou_log_pdf
   kou_log_chf

   bsd1d2
   bscall
   bscall_delta
   bsput
   bsput_delta

Notes
=====
To the benefit of interactive and notebook sessions, and as an aid to
fluently freeze or vary the plurality of parameters that define each
stochastic process, all sources, process realizations and analytical
results are objects with managed keywords (subclasses of `kfunc`): if the
corresponding classes are instantiated with both variables (non-keyword
arguments) and parameters (keyword arguments), they behave as functions
returning the computation's result; if called with parameters only, they
return an instance that stores the set parameters, and exposes the same
calling pattern (call with variables and optionally with parameters to
get results, call with
parameters only to get a new instance storing modified parameters).
Parameters that are not specified fall back to the class defaults, if
calling the class, or to the instance's stored values, if calling an
instance.
"""

From warren.weckesser at gmail.com Fri Sep 8 07:22:57 2017
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Fri, 8 Sep 2017 07:22:57 -0400
Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests?
Message-ID:

In the original version of a pull request that I am updating (https://github.com/scipy/scipy/pull/7190), I had filtered DeprecationWarnings in the unit tests. Now that we are using pytest, is this still necessary? According to https://docs.pytest.org/en/latest/warnings.html, by default, pytest doesn't report DeprecationWarnings, so it looks like there is no need to filter them with `numpy.testing.suppress_warnings()`. Is that correct?

Warren

From sebastian at sipsolutions.net Fri Sep 8 11:16:39 2017
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Fri, 08 Sep 2017 17:16:39 +0200
Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests?
In-Reply-To: References: Message-ID: <1504883799.21571.1.camel@sipsolutions.net>

Not sure how it is set up with pytest, but without pytest it was set up so that DeprecationWarnings are raised during development testing (so that upstream changes are loud) and printed in release versions.

I think that is not really a bad setup, so likely it is/would also be used with pytest?

- Sebastian

On Fri, 2017-09-08 at 07:22 -0400, Warren Weckesser wrote:
> In the original version of a pull request that I am updating (https://github.com/scipy/scipy/pull/7190), I had filtered DeprecationWarnings in the unit tests. Now that we are using pytest, is this still necessary?
> According to https://docs.pytest.org/en/latest/warnings.html, by default, pytest doesn't report DeprecationWarnings, so it looks like there is no need to filter them with `numpy.testing.suppress_warnings()`. Is that correct?
>
> Warren
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From warren.weckesser at gmail.com Fri Sep 8 13:03:55 2017
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Fri, 8 Sep 2017 13:03:55 -0400
Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests?
In-Reply-To: <1504883799.21571.1.camel@sipsolutions.net>
References: <1504883799.21571.1.camel@sipsolutions.net>
Message-ID:

On Fri, Sep 8, 2017 at 11:16 AM, Sebastian Berg wrote:
> Not sure how it is set up with pytest, but without pytest it was set up, so that DeprecationWarnings are raised during development testing (so that upstream changes are loud) and printed in release versions.
>
> I think that is not really a bad setup, so likely it is/would also be used with pytest?

Looks like it still works that way. I had been running the tests using the 'test()' functions (e.g. 'from scipy import stats; stats.test("full")'), and those ignore the DeprecationWarnings by default.

Warren

> - Sebastian
>
> On Fri, 2017-09-08 at 07:22 -0400, Warren Weckesser wrote:
> > In the original version of a pull request that I am updating (https://github.com/scipy/scipy/pull/7190), I had filtered DeprecationWarnings in the unit tests. Now that we are using pytest, is this still necessary?
> > According to https://docs.pytest.org/en/latest/warnings.html, by default, pytest doesn't report DeprecationWarnings, so it looks like there is no need to filter them with `numpy.testing.suppress_warnings()`. Is that correct?
> >
> > Warren
> >
> > _______________________________________________
> > SciPy-Dev mailing list
> > SciPy-Dev at python.org
> > https://mail.python.org/mailman/listinfo/scipy-dev
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

From haberland at ucla.edu Fri Sep 8 13:06:56 2017
From: haberland at ucla.edu (Matt Haberland)
Date: Fri, 8 Sep 2017 10:06:56 -0700
Subject: [SciPy-Dev] Stochastic calculus package
In-Reply-To: References: Message-ID:

Sounds like a pretty general computational tool to me. I have some problems in rigid body dynamics that I'd like to use this for.

Matt

On Fri, Sep 8, 2017 at 12:55 AM, Maurizio Cipollina <maurizio.cipollina at gmail.com> wrote:
> Hi everybody, and first of all thanks for the great job you have been doing, on which I relied *A LOT* over time.
>
> Stochastic calculus and stochastic differential equations are a significant branch of mathematics that to my understanding is underrepresented in scipy, as well as in the open source python stack at large. I may well be missing something, but as far as I can see one can find out there many recipes or collections of recipes, and a number of packages/projects skewed towards finance, but no structured framework for generating and handling stochastic processes and numerically solving SDEs.
>
> This is a pity, since scipy/numpy provide efficient implementations of all the basic ingredients needed to do the trick.
> [... remainder of the quoted message, including the full package docstring, snipped; it appears in full in Maurizio's original message above ...]
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

--
Matt Haberland
Assistant Adjunct Professor in the Program in Computing
Department of Mathematics
7620E Math Sciences Building, UCLA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haberland at ucla.edu Fri Sep 8 14:03:44 2017
From: haberland at ucla.edu (Matt Haberland)
Date: Fri, 8 Sep 2017 11:03:44 -0700
Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy
In-Reply-To: References: Message-ID:

Regarding long enumerations, I would suggest that the added bullet points are examples of *positive* behavior. There are actually fewer examples of *negative* behavior in 1.4. I think this improves the tone.

I prefer the addition of the positive examples in 1.4, the space for contacting the project team, the organization/headings of each section, and the overall wording of 1.4. It seems more comprehensive and professional.

On Tue, Aug 29, 2017 at 10:32 PM, Nathaniel Smith wrote:
> On Sat, Aug 26, 2017 at 7:57 PM, Charles R Harris wrote:
> > On Sat, Aug 26, 2017 at 7:32 PM, Nathaniel Smith wrote:
> >> On Sat, Aug 26, 2017 at 6:14 PM, Ralf Gommers wrote:
> >> > Thanks for the feedback Chuck. Pauli said on GitHub he preferred something shorter and less flowery as well, so that's two votes for the contributor covenant or something more in that direction.
> >>
> >> It looks like the link above goes to the 1.2 version of the Contributor Covenant, while the latest version is 1.4:
> >>
> >> https://www.contributor-covenant.org/version/1/4/
> >>
> >> The 1.4 version has a little more detail on venues and a slot to put in a contact address.
> >
> > The extra sections are good but I think the long enumerations dilute the message. All means all is a stronger statement than a list of every conceivable attribute. I think the 1.2 version is better in that regard.
>
> Folks who look like you or me aren't really the intended audience for this -- we already know we're included :-). The important thing is how the message comes across to those who might otherwise feel excluded. So I think this is a place where we should defer to the experts.
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev

--
Matt Haberland
Assistant Adjunct Professor in the Program in Computing
Department of Mathematics
7620E Math Sciences Building, UCLA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Fri Sep 8 14:16:22 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 08 Sep 2017 20:16:22 +0200
Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests?
In-Reply-To: References: <1504883799.21571.1.camel@sipsolutions.net>
Message-ID: <1504894582.3363.7.camel@iki.fi>

On Fri, 2017-09-08 at 13:03 -0400, Warren Weckesser wrote:
> On Fri, Sep 8, 2017 at 11:16 AM, Sebastian Berg <sebastian at sipsolutions.net> wrote:
> > Not sure how it is set up with pytest, but without pytest it was set up, so that DeprecationWarnings are raised during development testing (so that upstream changes are loud) and printed in release versions.
> >
> > I think that is not really a bad setup, so likely it is/would also be used with pytest?
>
> Looks like it still works that way. I had been running the tests using the 'test()' functions (e.g. 'from scipy import stats; stats.test("full")'), and those ignore the DeprecationWarnings by default.

With pytest, we apply the default development settings by having a pytest.ini in the repository. It applies if you use runtests.py (maybe with the -n option) or use inplace builds or otherwise launch pytest from under the repository dir, but not when otherwise running tests from an installed module elsewhere.

This seemed to me to be the "pytest way", and it appeared best not to add more detailed hacks trying to fight the tool (currently, the test() functions have minimal customization, and it seems to me to be for the best to keep them as close as possible to the pytest command line usage).
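[Editorial note: the repository-local development settings Pauli describes can be made concrete with a sketch of the kind of pytest.ini that achieves them. This is illustrative only; the actual settings live in SciPy's repository and may differ.]

```ini
; pytest.ini (hypothetical sketch) -- applies only when pytest is
; launched from under the repository, e.g. via runtests.py
[pytest]
filterwarnings =
    error::DeprecationWarning
    error::FutureWarning
```

Because pytest only picks up this file when run from the repository root, an installed copy of the package tested elsewhere falls back to pytest's defaults, which is exactly the development-loud/release-quiet split discussed in the thread.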
Pauli From warren.weckesser at gmail.com Fri Sep 8 14:36:06 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 8 Sep 2017 14:36:06 -0400 Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests? In-Reply-To: <1504894582.3363.7.camel@iki.fi> References: <1504883799.21571.1.camel@sipsolutions.net> <1504894582.3363.7.camel@iki.fi> Message-ID: On Fri, Sep 8, 2017 at 2:16 PM, Pauli Virtanen wrote: > pe, 2017-09-08 kello 13:03 -0400, Warren Weckesser kirjoitti: > > On Fri, Sep 8, 2017 at 11:16 AM, Sebastian Berg > ns.net> > > wrote: > > > > > Not sure how it is set up with pytest, but without pytest it was > > > set > > > up, so that DeprecationWarning are raised during development > > > testing > > > (so that upstream changes are loud) and printed in release > > > versions. > > > > > > I think that is not really a bad setup, so likely it is/would also > > > be > > > used with pytest? > > > > Looks like it still works that way. I had been running the tests > > using the > > 'test()' functions (e.g. 'from scipy import stats; > > stats.test("full")'), > > and those ignore the DeprecationWarnings by default. > > With pytest, we apply the default development settings by having a > pytest.ini in the repository. It applies if you use runtests.py (maybe > with the -n option) or use inplace builds or otherwise launch pytest > from under the repository dir, but not when otherwise running tests > from an installed module elsewhere. > > Will python runtests.py -s stats use the development settings? Warren This seemed to me to be the "pytest way", and it appeared best to not > add more detailed hacks trying to fight the tool (currently, the test() > have minimal customization, and it seems to me to be for the best to > keep them as close as possible to the pytest command line usage). 
> > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Sep 8 15:32:27 2017 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 08 Sep 2017 21:32:27 +0200 Subject: [SciPy-Dev] Need to filter DeprecationWarnings in the tests? In-Reply-To: References: <1504883799.21571.1.camel@sipsolutions.net> <1504894582.3363.7.camel@iki.fi> Message-ID: <1504899147.3363.9.camel@iki.fi> pe, 2017-09-08 kello 14:36 -0400, Warren Weckesser kirjoitti: > On Fri, Sep 8, 2017 at 2:16 PM, Pauli Virtanen wrote: > > > pe, 2017-09-08 kello 13:03 -0400, Warren Weckesser kirjoitti: > > > On Fri, Sep 8, 2017 at 11:16 AM, Sebastian Berg > > utio > > > ns.net> > > > wrote: > > > > > > > Not sure how it is set up with pytest, but without pytest it > > > > was > > > > set > > > > up, so that DeprecationWarning are raised during development > > > > testing > > > > (so that upstream changes are loud) and printed in release > > > > versions. > > > > > > > > I think that is not really a bad setup, so likely it is/would > > > > also > > > > be > > > > used with pytest? > > > > > > Looks like it still works that way. I had been running the tests > > > using the > > > 'test()' functions (e.g. 'from scipy import stats; > > > stats.test("full")'), > > > and those ignore the DeprecationWarnings by default. > > > > With pytest, we apply the default development settings by having a > > pytest.ini in the repository. It applies if you use runtests.py > > (maybe > > with the -n option) or use inplace builds or otherwise launch > > pytest > > from under the repository dir, but not when otherwise running tests > > from an installed module elsewhere. > > > > > > Will > > python runtests.py -s stats > > use the development settings? Yes. 
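To illustrate what the development settings change: a pytest.ini `filterwarnings` entry such as `error::DeprecationWarning` promotes deprecation warnings to hard errors during test runs. The stdlib `warnings` filters sketch the same mechanism (an illustration only, not SciPy's actual pytest.ini contents):

```python
import warnings

def deprecated_func():
    # Stand-in for a library call that warns about a deprecated API.
    warnings.warn("old API", DeprecationWarning)

# By default DeprecationWarning is usually silenced; a development test
# configuration installs an "error" filter so that upstream deprecations
# fail loudly instead of passing unnoticed.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        deprecated_func()
        raised = False
    except DeprecationWarning:
        raised = True

print(raised)  # True
```

Outside the `catch_warnings` block the previous filters are restored, which is why an installed scipy tested elsewhere shows the default (quiet) behavior.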
(The comment above about -n was just to point out that you can also use runtests.py for running tests in the currently installed scipy.) > > > Warren > > > > This seemed to me to be the "pytest way", and it appeared best to not > > add more detailed hacks trying to fight the tool (currently, the > > test() > > have minimal customization, and it seems to me to be for the best > > to > > keep them as close as possible to the pytest command line usage). > > > > Pauli > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev From ralf.gommers at gmail.com Fri Sep 8 21:31:55 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 9 Sep 2017 13:31:55 +1200 Subject: [SciPy-Dev] Stochastic calculus package In-Reply-To: References: Message-ID: Hi Maurizio, On Fri, Sep 8, 2017 at 7:55 PM, Maurizio Cipollina < maurizio.cipollina at gmail.com> wrote: > Hi everybody, and first of all thanks for the great job you have been > doing, on which I relied *A LOT* over time. > > Stochastic calculus and Stochastic Differential Equations are a > significant branch of mathematics that to my understanding is > underrepresented in scipy, as well as in the open source python stack at > large. I may well be missing something, but as I can see one can find out > there many recipies or collections of recipies, and a number of > packages/projects skewed towards finance, but no structured framework for > generating and handling stochastic processes and numerically solving SDEs. > I've never used these, but a quick search turns up two packages: https://pypi.python.org/pypi/sdeint https://github.com/alu042/SDE-higham Not very mature probably, but would be good to compare your code with those. 
> This is a pity, since scipy/numpy provide efficient implementations of all > the basic ingredients needed to do the trick. Out of professional need, and > prompted by this gap, I have developed a general purpose package of which I > own the copyright, and which I would be happy to release under the BSD > license as part of scipy, in case there is a consensus among the scipy > community that it makes sense to do so (the package has no dependencies > beyond standard library, numpy and scipy, and might probably fit best as a > subpackage of scipy.stats). > > Before setting up a PR, I would be happy to hear your thoughts: I have > pasted below the package docstring, that should give an overview of its > scope and limitations. Please note that some of the realized processes > stray into mathematical finance territory, somehow inevitably since finance > is one major field of application of stochastic calculus, but the focus is > the latter, not the former, at least in my intentions. > First thought: this looks like a useful addition to the scientific Python ecosystem, but is probably too specialized for SciPy (at least at this stage). Based on your docstring below, here are some questions/concerns: 1. We definitely don't want a new ndarray subclass (there's a pretty strong consensus at this point that subclassing ndarray is usually not a good solution), so your `process` doesn't sound attractive. 2. The `integrator` class sounds a lot like odeint, so maybe this would fit better with scipy.integrate? 3. Monte-Carlo simulation is a big and fairly technical topic. There are whole packages (e.g. PyMC3, emcee) dedicated to this. It may not fit well with SciPy. 4. Things specific to finance like Black-Scholes and put/call options are not a good fit. 5. Maintainers for this added code. The ODE integrators in scipy.integrate already suffer from lack of a dedicated maintainer, SDEs are more specialized so are likely to not be very well-maintained by other devs than you.
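On point 1, the usual alternative to subclassing ndarray is composition: keep the timeline and the sample array as plain ndarrays inside an ordinary class and delegate to them. A minimal sketch (class name and methods are invented here for illustration, not the proposed package's API):

```python
import numpy as np

class ProcessPaths:
    """Hypothetical container for simulated paths: composition
    over np.ndarray subclassing."""

    def __init__(self, t, x):
        self.t = np.asarray(t, dtype=float)  # timeline, shape (n,)
        self.x = np.asarray(x, dtype=float)  # values, shape (n, paths)

    def mean_across_paths(self):
        # Average the process values at each time point.
        return self.x.mean(axis=-1)

    def __call__(self, s):
        # Interpolate each path linearly along the timeline.
        return np.array([np.interp(s, self.t, self.x[:, k])
                         for k in range(self.x.shape[-1])]).T

t = np.linspace(0.0, 1.0, 5)
p = ProcessPaths(t, np.stack([t, 2 * t], axis=-1))  # two paths
print(p(0.5))  # both paths evaluated at t=0.5: [0.5 1. ]
```

This keeps numpy semantics untouched while still offering callability and path/time reductions as ordinary methods.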
Without seeing your code it's hard to say for sure, but it looks to me like it would be better to release your code as a standalone package. Cheers, Ralf > Thanks in advance for taking your time to go through this, and for your > comments and suggestions. > Maurizio > > """ > =========================================================== > Stochastic - Monte Carlo simulation of stochastic processes > =========================================================== > > This package provides: > > 1. A `process` class, a subclass of `numpy.ndarray` representing > a sequence of values in time, realized in one or several paths. > Algebraic manipulations and ufunc computations are supported for > instances that share the same timeline and comply with numpy > broadcasting rules. Interpolation along the timeline is supported > via callability of `process` instances. Process-specific > functionalities, > such as averaging and indexing along time or across paths, are > delegated > to process-specific methods, attributes and properties (no overriding > of `numpy.ndarray` operations). > > 2. The `source` class and its subclasses, stochasticity sources providing > numerical realizations of the differentials commonly found > in stochastic differential equations (SDEs), with or without > memory of formerly invoked realizations. > > 3. The `integrator` class, a general framework for numerical SDE > integration, intended for subclassing, to assist the definition of > step by > step integration algorithms and the computation of numerical > realizations of > stochastic processes. > > 4. Several objects providing realizations of specific stochastic processes > along a given timeline and across a requested number of paths. > Realizations are obtained via explicit formulae, in case the relevant > SDE > has a closed form solution, or otherwise as `integrator` subclasses > performing > Euler-Maruyama numerical integration. > > 5.
A `montecarlo` class, as an aid to cumulate the results of several > Monte Carlo simulations of a given stochastic variable, and to extract > summary estimates for its probability distribution function and > statistics, thus supporting simulations across a number of paths that > exceeds > the maximum allowed by available memory. > > 6. Several analytical results relating to the supported stochastic > processes, as a general reference and for testing purposes. > > For all sources and realizations, process values can take any shape, > scalar or multidimensional. Correlated multivariate stochasticity sources > are > supported. Time-varying process parameters (correlations, intensity of > Poisson > processes, volatilities etc.) are allowed whenever applicable. > `process` instances act as valid stochasticity source realizations (as does > any callable object complying with a `source` protocol), and may be > passed as a source specification when computing the realization of a given > process. > > Computations are fully vectorized across paths, providing an efficient > infrastructure for simulating a large number of process realizations. > Less so, for large number of time steps: integrating 1000 time steps > across 100000 paths takes seconds, one million time steps across > 100 paths takes minutes. > > General infrastructure > ====================== > .. autosummary:: > :toctree: generated/ > > process -- Array representation of a process (a subclass of > -- numpy.ndarray). > kfunc -- Base class for functions with managed keyword arguments. > source -- Base class for stochasticity sources. > true_source -- Base class for stochasticity sources with memory. > integrator -- General framework for numerical SDE integration. > montecarlo -- Summary statistics of Monte Carlo simulations. > > > Stochasticity sources > ===================== > .. autosummary:: > :toctree: generated/ > > wiener -- dW, a source of standard Wiener process (Brownian > -- motion) increments.
> symmetric_wiener -- as above, with symmetric paths (averages exactly > 0). > true_wiener -- dW, a source of standard Wiener process (Brownian > -- motion) increments, with memory. > poisson -- dN, a source of Poisson process increments. > compound_poisson -- dJ, a source of compound Poisson process > increments. > -- (jumps distributed in time as a Poisson > process, > -- and in size as a given `scipy.stats` > distribution). > compound_poisson_rv -- Preset jump size distributions for compound > Poisson > -- process increments. > > > Stochastic process realizations > =============================== > .. autosummary:: > :toctree: generated/ > > const_wiener_process -- Wiener process (Brownian motion), with > -- time-independent parameters. > const_lognorm_process -- Lognormal process, with > time-independent > -- parameters. > wiener_process -- Wiener process (Brownian motion). > lognorm_process -- Lognormal process. > ornstein_uhlenbeck_process -- Ornstein-Uhlenbeck process > (mean-reverting > -- Brownian motion). > hull_white_process -- F-factors Hull-White process (sum of F > -- correlated mean-reverting Brownian > -- motions). > hull_white_1factor_process -- 1-factor Hull-White process (F=1 > Hull-White > -- process with F-index collapsed to a > -- scalar). > cox_ingersoll_ross_process -- Cox-Ingersoll-Ross mean-reverting > process. > full_heston_process -- Heston stochastic volatility process > -- (returns both process and > volatility). > heston_process -- Heston stochastic volatility process > -- (stores and returns process only). > jump_diffusion_process -- Jump-diffusion process (lognormal > process > -- with compound Poisson jumps). > merton_jump_diffusion_process -- Merton jump-diffusion process > -- (jump-diffusion process with normal > -- jump size distribution). > kou_jump_diffusion_process -- Kou jump-diffusion process > (jump-diffusion > -- process with double exponential > -- jump size distribution). 
> > Shortcuts:: > > lognorm -- lognorm_process > oruh -- ornstein_uhlenbeck_process > hwp -- hull_white_process > hw1f -- hull_white_1factor_process > cir -- cox_ingersoll_ross_process > heston -- heston_process > mjd -- merton_jump_diffusion_process > kou -- kou_jump_diffusion_process > > > Analytical results > ================== > Exact statistics for the realized stochastic processes > are listed below, limited to the case of constant process parameters and > with > some further limitations to the parameters' domains. > Function arguments are consistent with those of the corresponding > processes. > Suffixes `pdf`, `cdf` and `chf` stand respectively for probability > distribution > function, cumulative probability distribution function, and characteristic > function. Black-Scholes formulae for the valuation of call and put options > have been > included (prefixed with `bs` below). > > .. autosummary:: > :toctree: generated/ > > wiener_mean > wiener_var > wiener_std > wiener_pdf > wiener_cdf > wiener_chf > > lognorm_mean > lognorm_var > lognorm_std > lognorm_pdf > lognorm_cdf > lognorm_log_chf > > oruh_mean > oruh_var > oruh_std > oruh_pdf > oruh_cdf > > hw2f_mean > hw2f_var > hw2f_std > hw2f_pdf > hw2f_cdf > > cir_mean > cir_var > cir_std > cir_pdf > > heston_log_mean > heston_log_var > heston_log_std > heston_log_pdf > heston_log_chf > > mjd_log_pdf > mjd_log_chf > > kou_mean > kou_log_pdf > kou_log_chf > > bsd1d2 > bscall > bscall_delta > bsput > bsput_delta > > Notes > ===== > To the benefit of interactive and notebook sessions, and as an aid to > fluently > freeze or vary the plurality of parameters that define each stochastic > process, > all sources, process realizations and analytical results are objects > with managed keywords (subclasses of `kfunc`): if the corresponding classes > are instantiated with both variables (non keyword arguments) and parameters > (keyword arguments), they behave as functions returning the computations' > result; if called 
with parameters only, return an instance that stores the > set > parameters, and exposes the same calling pattern (call with variables > and optionally with parameters to get results, call with parameters only > to get a new instance storing modified parameters). > Parameters that are not specified fall back to the class defaults, if > calling > the class, or to the instance's stored values, if calling an instance. > """ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 8 23:24:32 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 9 Sep 2017 15:24:32 +1200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sat, Sep 9, 2017 at 6:03 AM, Matt Haberland wrote: > Regarding long enumerations, I would suggest that the added bullet points > are examples of *positive* behavior. There are actually fewer examples of *negative > *behavior in 1.4. I think this improves the tone. > > I prefer the addition of the positive examples in 1.4, the space for > contacting the project team, the organization/headings of each section, and > the overall wording of 1.4. It seems more comprehensive and professional. > I agree. That new version addresses my concern about the harsh tone of version 1.2. Overall there seems to be a preference for the Contributor Covenant or at least something of that level of conciseness, rather than the Jupyter/Django style CoC. So let's go in that direction. Ralf > > On Tue, Aug 29, 2017 at 10:32 PM, Nathaniel Smith wrote: > >> On Sat, Aug 26, 2017 at 7:57 PM, Charles R Harris >> wrote: >> > >> > >> > On Sat, Aug 26, 2017 at 7:32 PM, Nathaniel Smith wrote: >> >> >> >> On Sat, Aug 26, 2017 at 6:14 PM, Ralf Gommers >> >> wrote: >> >> > >> >> > Thanks for the feedback Chuck. 
Pauli said on GitHub he preferred >> >> > something >> >> > shorter and less flowery as well, so that's two votes for the >> >> > contributor >> >> > covenant or something more in that direction. >> >> >> >> It looks like the link above goes to the 1.2 version of the >> >> Contributor Covenant, while the latest version is 1.4: >> >> >> >> https://www.contributor-covenant.org/version/1/4/ >> >> >> >> The 1.4 version has a little more detail on venues and a slot to put >> >> in a contact address. >> >> >> > >> > The extra sections are good but I think the long enumerations dilute the >> > message. All means all is a stronger statement than a list of every >> > conceivable attribute. I think the 1.2 version is better in that regard. >> >> Folks who look like you or me aren't really the intended audience for >> this ? we already know we're included :-). The important thing is how >> the message comes across to those who might otherwise feel excluded. >> So I think this is a place where we should defer to the experts. >> >> -n >> >> -- >> Nathaniel J. Smith -- https://vorpus.org >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > > > > -- > Matt Haberland > Assistant Adjunct Professor in the Program in Computing > Department of Mathematics > 7620E Math Sciences Building, UCLA > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Sep 10 10:06:37 2017 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 10 Sep 2017 16:06:37 +0200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: <1505052397.2303.14.camel@iki.fi> la, 2017-09-09 kello 15:24 +1200, Ralf Gommers kirjoitti: > I agree. 
That new version addresses my concern about the harsh tone > of > version 1.2. > > Overall there seems to be a preference for the Contributor Covenant > or at > least something of that level of conciseness, rather than the > Jupyter/Django style CoC. So let's go in that direction. The text of the current version of the Contributor Covenant seems easy to understand and fairly reasonable to me. The scope where it applies is clearly defined, and essentially codifies not accepting wildly unprofessional behavior on scipy issue trackers etc. It is also adopted by a number of other projects, so I think it is a better starting point. Pauli From ralf.gommers at gmail.com Mon Sep 11 04:44:51 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 11 Sep 2017 20:44:51 +1200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: <1505052397.2303.14.camel@iki.fi> References: <1505052397.2303.14.camel@iki.fi> Message-ID: On Mon, Sep 11, 2017 at 2:06 AM, Pauli Virtanen wrote: > la, 2017-09-09 kello 15:24 +1200, Ralf Gommers kirjoitti: > > I agree. That new version addresses my concern about the harsh tone > > of > > version 1.2. > > > > Overall there seems to be a preference for the Contributor Covenant > > or at > > least something of that level of conciseness, rather than the > > Jupyter/Django style CoC. So let's go in that direction. > > The text of the current version of the Contributor Covenant seems easy > to understand and fairly reasonable to me. The scope where it applies > is clearly defined, and essentially codifies not accepting wildly > unprofessional behavior on scipy issue trackers etc. It is also adopted > by a number of other projects, so I think it is a better starting > point. > Agreed. There's also the Apache one ( https://www.apache.org/foundation/policies/conduct.html, thanks to Stefan for the suggestion), which is also clear and concise, and has a much friendlier tone than the Contributor Covenant. 
Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Sep 11 05:14:31 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 11 Sep 2017 10:14:31 +0100 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: <1505052397.2303.14.camel@iki.fi> Message-ID: Hi, On Mon, Sep 11, 2017 at 9:44 AM, Ralf Gommers wrote: > > > On Mon, Sep 11, 2017 at 2:06 AM, Pauli Virtanen wrote: >> >> la, 2017-09-09 kello 15:24 +1200, Ralf Gommers kirjoitti: >> > I agree. That new version addresses my concern about the harsh tone >> > of >> > version 1.2. >> > >> > Overall there seems to be a preference for the Contributor Covenant >> > or at >> > least something of that level of conciseness, rather than the >> > Jupyter/Django style CoC. So let's go in that direction. >> >> The text of the current version of the Contributor Covenant seems easy >> to understand and fairly reasonable to me. The scope where it applies >> is clearly defined, and essentially codifies not accepting wildly >> unprofessional behavior on scipy issue trackers etc. It is also adopted >> by a number of other projects, so I think it is a better starting >> point. > > > Agreed. There's also the Apache one > (https://www.apache.org/foundation/policies/conduct.html, thanks to Stefan > for the suggestion), which is also clear and concise, and has a much > friendlier tone than the Contributor Covenant. I greatly dislike the Contributor Covenant - I find it to be an unpleasant, intimidating document. Just for example, if you run it through https://readable.io/text/, it gets an E for Readability (on an A-E scale), Moderately Formal, and a 100% score for Male on a Male / Female detection algorithm. I am not sure I really like the Apache code of conduct, but I think it's much better - more positive, direct and friendly in tone. I could certainly live with it.
Just for comparison, C for readability, Slightly Formal, about 78% Male. Cheers, Matthew From maurizio.cipollina at gmail.com Mon Sep 11 12:35:19 2017 From: maurizio.cipollina at gmail.com (Maurizio Cipollina) Date: Mon, 11 Sep 2017 18:35:19 +0200 Subject: [SciPy-Dev] Stochastic calculus package In-Reply-To: References: Message-ID: Hi Ralf, thanks for going through this, here are some follow up comments on your points: On the two packages you found: - Sdeint: provides several SDE integration algorithms but no specific processes; does not cover processes with jumps; offers no explicit provisions for handling solutions in a number of paths, which is what you need since the integration output is random variables, not numbers. - SDE-higham is not a package or tool, but rather a collection of recipes for a few textbook examples (no jumps either), in the form that an interactive session might take (set globals, do calculations, plot and print results). About your questions/concerns: 1. ndarray subclassing: the integrator does not depend on the process class, and process realizations may be easily tweaked to return an ndarray instead of a process instance; however, a main point of the package is not just to produce SDE solutions, but to allow the user to handle them (almost) effortlessly once obtained: so one might possibly hide the process class inside the testing suite, which depends on it, but hardly get rid of it altogether. Indeed, I subclassed ndarray as safely as I could figure out, but I will surely go through the scipy-dev archives and find out what advised you against doing so. 2. odeint and scipy.integrate: my view is that odeint churns out numbers dependant on numbers, while SDE integration churns out random variables dependant on random variables (the stochasticity sources), so there is a shift of paradigm and the two schemes do not fit smoothly one within the other. 
The step by step integration of SDEs makes sense, and becomes useful, if you can easily control the inputs (e.g. correlations among stochasticity sources fed into different SDEs), generate the output in a decent number of paths, and easily extract statistics of expressions dependent on it (hence the flow "get a number of paths packaged as a process instance" -> "do numpy algebra with it" -> "estimate probability distributions etc. using the process class methods" -> "scale it up with the montecarlo class in case the needed paths do not fit into memory"). 3. Monte Carlo simulation is a big topic indeed, one that goes far beyond this package; what is called the montecarlo class is in fact a tool to cumulate statistical information extracted from a number of realizations of a random variable, and is focused on making it easy to interpret the output of SDE integration. 4. Black-Scholes assists one more test in a rather extensive testing suite, and might be painlessly dropped. This said, I see your general feeling is that this whole subject is too specialized to fit into scipy at this stage. I suggested otherwise, in my proposal, because stochastic calculus is regarded as one of the pillars of modern probability theory, and covering it in scipy might be relevant to lots of mathematicians and physicists, both practitioners and students, that might otherwise hesitate to adopt, and build dependencies upon, a standalone package. In case you reconsider let me know, I'd be happy to help (now or in the future). If you have further questions, let me know as well. Cheers Maurizio On 9 September 2017 at 03:31, Ralf Gommers wrote: > Hi Maurizio, > > > On Fri, Sep 8, 2017 at 7:55 PM, Maurizio Cipollina < > maurizio.cipollina at gmail.com> wrote: > >> Hi everybody, and first of all thanks for the great job you have been >> doing, on which I relied *A LOT* over time.
>> >> Stochastic calculus and Stochastic Differential Equations are a >> significant branch of mathematics that to my understanding is >> underrepresented in scipy, as well as in the open source python stack at >> large. I may well be missing something, but as I can see one can find out >> there many recipies or collections of recipies, and a number of >> packages/projects skewed towards finance, but no structured framework for >> generating and handling stochastic processes and numerically solving SDEs. >> > > I've never used these, but a quick search turns up two packages: > https://pypi.python.org/pypi/sdeint > https://github.com/alu042/SDE-higham > Not very mature probably, but would be good to compare your code with > those. > > >> This is a pity, since scipy/numpy provide efficen implementations of all >> the basic ingredients needed to do the trick. Out of professional need, and >> prompted by this gap, I have developed a general purpose package of which I >> own the copyright, and which I would be happy to release under the BSD >> license as part of scipy, in case there is a consensus among the scipy >> community that it makes sense to do so (the package has no dependencies >> beyond standard library, numpy and scipy, and might probably fit best as a >> subpackage of scipy.stats). >> >> Before setting up a PR, I would be happy to hear your thoughts: I have >> pasted below the package docstring, that should give an overview of its >> scope and limitations. Please note that some of the realized processes >> stray into mathematical finance territory, somehow inevitably since finance >> is one major field of application of stochastic calculus, but the focus is >> the latter, not the former, at least in my intentions. >> > > First thought: this looks like a useful addition to the scientific Python > ecosystem, but is probably too specialized for SciPy (at least at this > stage). > > Based on your docstring below, here are some questions/concerns: > 1. 
We definitely don't want a new ndarray subclass (there's a pretty > strong consensus at this point that subclassing ndarray is usually not a > good solution), so your `process` doesn't sound attractive. > 2. The `integrator` class sounds a lot like odeint, so maybe this would > fit better with scipy.integrate? > 3. Monte-Carlo simulation is a big and fairly technical topic. There are > whole packages (e.g. PyMC3, emcee) dedicated to this. It may not fit well > with SciPy. > 4. Things specific to finance like Black-Scholes and put/call options are > not a good fit. > 5. Maintainers for this added code. The ODE integrators in scipy.integrate > already suffer from lack of a dedicated maintainer, SDEs are more > specialized so are likely to not be very well-maintained by other devs than > you. > > Without seeing your code it's hard to say for sure, but it looks to me > like it would be better to release your code as a standalone package. > > Cheers, > Ralf > > > >> Thanks in advance for taking your time to go through this, and for your >> comments and suggestions. >> Maurizio >> >> ??? >> =========================================================== >> Stochastic - Monte Carlo simulation of stochastic processes >> =========================================================== >> >> This package provides: >> >> 1. A `process` class, a subclass of `numpy.ndarray` representing >> a sequence of values in time, realized in one or several paths. >> Algebraic manipulations and ufunc computations are supported for >> instances that share the same timeline and comply with numpy >> broadcasting rules. Interpolation along the timeline is supported >> via callability of `process` instances. Process-specific >> functionalities, >> such as averaging and indexing along time or across paths, are >> delegated >> to process-specific methods, attributes and properties (no overriding >> of `numpy.ndarray` operations). >> >> 2. 
The `source` class and its subclasses, stochasticity sources providing >> numerical realizations of the differentials commonly found >> in stochastic differential equations (SDEs), with or without >> memory of formerly invoked realizations. >> >> 3. The `integrator` class, a general framework for numerical SDE >> integration, intended for subclassing, to assist the definition of >> step by >> step integration algorithms and the computation of numerical >> realizations of >> stochastic processes. >> >> 4. Several objects providing realizations of specific stochastic >> processes >> along a given timeline and across a requested number of paths. >> Realizations are obtained via explicit formulae, in case the relevant >> SDE >> has a closed form solution, or otherwise as `integrator` subclasses >> performing >> Euler-Maruyama numerical integration. >> >> 5. A `montecarlo` class, as an aid to cumulate the results of several >> Monte Carlo simulations of a given stochastic variable, and to extract >> summary estimates for its probability distribution function and >> statistics, thus supporting simulations across a number of paths that >> exceeds >> the maximum allowed by available memory. >> >> 6. Several analytical results relating to the supported stochastic >> processes, as a general reference and for testing purpouses. >> >> For all sources and realizations, process values can take any shape, >> scalar or multidimensional. Correlated multivariate stochasticity sources >> are >> supported. Time-varying process parameters (correlations, intensity of >> Poisson >> processes, volatilities etc.) are allowed whenever applicable. >> `process` instances act as valid stochasticity source realizations (as >> does >> any callable object complying with a `source` protocol), and may be >> passed as a source specification when computing the realization of a given >> process. 
>> >> Computations are fully vectorized across paths, providing an efficient >> infrastructure for simulating a large number of process realizations. >> Less so for a large number of time steps: integrating 1000 time steps >> across 100000 paths takes seconds, one million time steps across >> 100 paths takes minutes. >> >> General infrastructure >> ====================== >> .. autosummary:: >> :toctree: generated/ >> >> process -- Array representation of a process (a subclass of >> -- numpy.ndarray). >> kfunc -- Base class for functions with managed keyword arguments. >> source -- Base class for stochasticity sources. >> true_source -- Base class for stochasticity sources with memory. >> integrator -- General framework for numerical SDE integration. >> montecarlo -- Summary statistics of Monte Carlo simulations. >> >> >> Stochasticity sources >> ===================== >> .. autosummary:: >> :toctree: generated/ >> >> wiener -- dW, a source of standard Wiener process >> (Brownian >> -- motion) increments. >> symmetric_wiener -- as above, with symmetric paths (averages >> exactly 0). >> true_wiener -- dW, a source of standard Wiener process >> (Brownian >> -- motion) increments, with memory. >> poisson -- dN, a source of Poisson process increments. >> compound_poisson -- dJ, a source of compound Poisson process >> increments. >> -- (jumps distributed in time as a Poisson >> process, >> -- and in size as a given `scipy.stats` >> distribution). >> compound_poisson_rv -- Preset jump size distributions for compound >> Poisson >> -- process increments. >> >> >> Stochastic process realizations >> =============================== >> .. autosummary:: >> :toctree: generated/ >> >> const_wiener_process -- Wiener process (Brownian motion), with >> -- time-independent parameters. >> const_lognorm_process -- Lognormal process, with >> time-independent >> -- parameters. >> wiener_process -- Wiener process (Brownian motion). >> lognorm_process -- Lognormal process. 
>> ornstein_uhlenbeck_process -- Ornstein-Uhlenbeck process >> (mean-reverting >> -- Brownian motion). >> hull_white_process -- F-factors Hull-White process (sum of F >> -- correlated mean-reverting Brownian >> -- motions). >> hull_white_1factor_process -- 1-factor Hull-White process (F=1 >> Hull-White >> -- process with F-index collapsed to a >> -- scalar). >> cox_ingersoll_ross_process -- Cox-Ingersoll-Ross mean-reverting >> process. >> full_heston_process -- Heston stochastic volatility process >> -- (returns both process and >> volatility). >> heston_process -- Heston stochastic volatility process >> -- (stores and returns process only). >> jump_diffusion_process -- Jump-diffusion process (lognormal >> process >> -- with compound Poisson jumps). >> merton_jump_diffusion_process -- Merton jump-diffusion process >> -- (jump-diffusion process with normal >> -- jump size distribution). >> kou_jump_diffusion_process -- Kou jump-diffusion process >> (jump-diffusion >> -- process with double exponential >> -- jump size distribution). >> >> Shortcuts:: >> >> lognorm -- lognorm_process >> oruh -- ornstein_uhlenbeck_process >> hwp -- hull_white_process >> hw1f -- hull_white_1factor_process >> cir -- cox_ingersoll_ross_process >> heston -- heston_process >> mjd -- merton_jump_diffusion_process >> kou -- kou_jump_diffusion_process >> >> >> Analytical results >> ================== >> Exact statistics for the realized stochastic processes >> are listed below, limited to the case of constant process parameters and >> with >> some further limitations to the parameters' domains. >> Function arguments are consistent with those of the corresponding >> processes. >> Suffixes `pdf`, `cdf` and `chf` stand respectively for probability >> distribution >> function, cumulative probability distribution function, and characteristic >> function. Black-Scholes formulae for the valuation of call and put >> options have been >> included (prefixed with `bs` below). >> >> .. 
autosummary:: >> :toctree: generated/ >> >> wiener_mean >> wiener_var >> wiener_std >> wiener_pdf >> wiener_cdf >> wiener_chf >> >> lognorm_mean >> lognorm_var >> lognorm_std >> lognorm_pdf >> lognorm_cdf >> lognorm_log_chf >> >> oruh_mean >> oruh_var >> oruh_std >> oruh_pdf >> oruh_cdf >> >> hw2f_mean >> hw2f_var >> hw2f_std >> hw2f_pdf >> hw2f_cdf >> >> cir_mean >> cir_var >> cir_std >> cir_pdf >> >> heston_log_mean >> heston_log_var >> heston_log_std >> heston_log_pdf >> heston_log_chf >> >> mjd_log_pdf >> mjd_log_chf >> >> kou_mean >> kou_log_pdf >> kou_log_chf >> >> bsd1d2 >> bscall >> bscall_delta >> bsput >> bsput_delta >> >> Notes >> ===== >> To the benefit of interactive and notebook sessions, and as an aid to >> fluently >> freeze or vary the plurality of parameters that define each stochastic >> process, >> all sources, process realizations and analytical results are objects >> with managed keywords (subclasses of `kfunc`): if the corresponding >> classes >> are instantiated with both variables (non keyword arguments) and >> parameters >> (keyword arguments), they behave as functions returning the computations' >> result; if called with parameters only, return an instance that stores >> the set >> parameters, and exposes the same calling pattern (call with variables >> and optionally with parameters to get results, call with parameters only >> to get a new instance storing modified parameters). >> Parameters that are not specified fall back to the class defaults, if >> calling >> the class, or to the instance's stored values, if calling an instance. >> """ >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Mon Sep 11 13:31:17 2017 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 11 Sep 2017 13:31:17 -0400 Subject: [SciPy-Dev] Stochastic calculus package In-Reply-To: References: Message-ID: On Mon, Sep 11, 2017 at 12:35 PM, Maurizio Cipollina < maurizio.cipollina at gmail.com> wrote: > Hi Ralf, > > > thanks for going through this, here are some follow up comments on your > points: > > > > On the two packages you found: > > > - Sdeint: provides several SDE integration algorithms but no specific > processes; does not cover processes with jumps; offers no explicit > provisions for handling solutions in a number of paths, which is what you > need since the integration output is random variables, not numbers. > - SDE-higham is not a package or tool, but rather a collection of > recipes for a few textbook examples (no jumps either), in the form that an > interactive session might take (set globals, do calculations, plot and > print results). > > > > About your questions/concerns: > > > 1. ndarray subclassing: the integrator does not depend on the process > class, and process realizations may be easily tweaked to return an ndarray > instead of a process instance; however, a main point of the package is not > just to produce SDE solutions, but to allow the user to handle them > (almost) effortlessly once obtained: so one might possibly hide the process > class inside the testing suite, which depends on it, but hardly get rid of > it altogether. Indeed, I subclassed ndarray as safely as I could figure > out, but I will surely go through the scipy-dev archives and find out what > advised you against doing so. > 2. odeint and scipy.integrate: my view is that odeint churns out > numbers dependent on numbers, while SDE integration churns out random > variables dependent on random variables (the stochasticity sources), so > there is a shift of paradigm and the two schemes do not fit smoothly one > within the other. 
The step by step integration of SDEs makes sense, and > gets useful, if you can easily control the inputs (e.g. correlations among > stochasticity sources fed into different SDEs), generate the output in a > decent number of paths, and easily extract statistics of expressions > dependent on it (hence the flow "get a number of paths packaged as a > process instance" -> "do numpy algebra with it" -> "estimate probability > distributions etc. using the process class methods" -> "scale it up with > the montecarlo class in case the needed paths do not fit into memory"). > 3. Monte Carlo simulation is a big topic indeed, one that goes far beyond > this package; what is called the montecarlo class is in fact a tool to > cumulate statistical information extracted from a number of realizations of > a random variable, and is focused on making it easy to interpret the output of > SDE integration. > 4. Black-Scholes assists some tests in a rather extensive testing > suite, and might be painlessly dropped. > > > > This said, I see your general feeling is that this whole subject is too > specialized to fit into scipy at this stage. I suggested otherwise, in my > proposal, because stochastic calculus is regarded as one of the pillars of > modern probability theory, and covering it in scipy might be relevant to > lots of mathematicians and physicists, both practitioners and students, > that might otherwise hesitate to adopt, and build dependencies upon, a > standalone package. In case you reconsider let me know, I'd be happy to > help (now or in the future?). > > > > If you have further questions, let me know as well. > > Cheers > > Maurizio > To partially follow up, similar to Ralf's view. I think it would be good to have this as a separate package, or at least the code on github. With distributions like conda it is now much easier to install additional packages, once they are established enough and are included in distributions or easily pip installable. 
The main reason that I think scipy is currently not the appropriate package for it is that the design and review would require a lot of work. I guess that there are not many maintainers and reviewers that are familiar with this area. I also know only of applications in finance and it is not clear whether or which parts would be of more general interest. In my opinion, some core tools for SDE and jump diffusions would fit in scipy. But the application support will not or might not fit well in a general purpose scientific numerical library. This would be similar to other areas where scipy provides the core computational tools, but applications are supported by other packages. For example, the process class might be similar in magnitude to scipy stats distributions or signal statespace classes, but I have no idea what design for it would fit in scipy. Some of the time aggregation properties sound similar to the corresponding pandas functions, and it is not clear whether users would want to use a separate class or just use pandas instead. Similarly, the kfunc class sounds like something that doesn't have a similar pattern in scipy. Design changes can be made more easily in a standalone package, while all refactorings of classes in scipy are difficult issues because of the backwards compatibility policy which requires that the design should be largely settled before a merge. 
Josef https://groups.google.com/d/msg/pystatsmodels/xsttiEiyqAg/maR6n4EeAgAJ (stackoverflow question has been deleted "Is there a library out there to calibrate an Ornstein-Uhlenbeck model?") https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/tsa/diffusion.py and diffusion2.py > > > > On 9 September 2017 at 03:31, Ralf Gommers wrote: > >> Hi Maurizio, >> >> >> On Fri, Sep 8, 2017 at 7:55 PM, Maurizio Cipollina < >> maurizio.cipollina at gmail.com> wrote: >> >>> Hi everybody, and first of all thanks for the great job you have been >>> doing, on which I relied *A LOT* over time. >>> >>> Stochastic calculus and Stochastic Differential Equations are a >>> significant branch of mathematics that to my understanding is >>> underrepresented in scipy, as well as in the open source python stack at >>> large. I may well be missing something, but as I can see one can find out >>> there many recipes or collections of recipes, and a number of >>> packages/projects skewed towards finance, but no structured framework for >>> generating and handling stochastic processes and numerically solving SDEs. >>> >> >> I've never used these, but a quick search turns up two packages: >> https://pypi.python.org/pypi/sdeint >> https://github.com/alu042/SDE-higham >> Not very mature probably, but would be good to compare your code with >> those. >> >> >>> This is a pity, since scipy/numpy provide efficient implementations of all >>> the basic ingredients needed to do the trick. Out of professional need, and >>> prompted by this gap, I have developed a general purpose package of which I >>> own the copyright, and which I would be happy to release under the BSD >>> license as part of scipy, in case there is a consensus among the scipy >>> community that it makes sense to do so (the package has no dependencies >>> beyond standard library, numpy and scipy, and might probably fit best as a >>> subpackage of scipy.stats). 
>>> >>> Before setting up a PR, I would be happy to hear your thoughts: I have >>> pasted below the package docstring, that should give an overview of its >>> scope and limitations. Please note that some of the realized processes >>> stray into mathematical finance territory, somehow inevitably since finance >>> is one major field of application of stochastic calculus, but the focus is >>> the latter, not the former, at least in my intentions. >>> >> >> First thought: this looks like a useful addition to the scientific Python >> ecosystem, but is probably too specialized for SciPy (at least at this >> stage). >> >> Based on your docstring below, here are some questions/concerns: >> 1. We definitely don't want a new ndarray subclass (there's a pretty >> strong consensus at this point that subclassing ndarray is usually not a >> good solution), so your `process` doesn't sound attractive. >> 2. The `integrator` class sounds a lot like odeint, so maybe this would >> fit better with scipy.integrate? >> 3. Monte-Carlo simulation is a big and fairly technical topic. There are >> whole packages (e.g. PyMC3, emcee) dedicated to this. It may not fit well >> with SciPy. >> 4. Things specific to finance like Black-Scholes and put/call options are >> not a good fit. >> 5. Maintainers for this added code. The ODE integrators in >> scipy.integrate already suffer from lack of a dedicated maintainer, SDEs >> are more specialized so are likely to not be very well-maintained by other >> devs than you. >> >> Without seeing your code it's hard to say for sure, but it looks to me >> like it would be better to release your code as a standalone package. >> >> Cheers, >> Ralf >> >> >> >>> Thanks in advance for taking your time to go through this, and for your >>> comments and suggestions. >>> Maurizio >>> >>> ??? 
>>> =========================================================== >>> Stochastic - Monte Carlo simulation of stochastic processes >>> =========================================================== >>> >>> This package provides: >>> >>> 1. A `process` class, a subclass of `numpy.ndarray` representing >>> a sequence of values in time, realized in one or several paths. >>> Algebraic manipulations and ufunc computations are supported for >>> instances that share the same timeline and comply with numpy >>> broadcasting rules. Interpolation along the timeline is supported >>> via callability of `process` instances. Process-specific >>> functionalities, >>> such as averaging and indexing along time or across paths, are >>> delegated >>> to process-specific methods, attributes and properties (no overriding >>> of `numpy.ndarray` operations). >>> >>> 2. The `source` class and its subclasses, stochasticity sources >>> providing >>> numerical realizations of the differentials commonly found >>> in stochastic differential equations (SDEs), with or without >>> memory of formerly invoked realizations. >>> >>> 3. The `integrator` class, a general framework for numerical SDE >>> integration, intended for subclassing, to assist the definition of >>> step by >>> step integration algorithms and the computation of numerical >>> realizations of >>> stochastic processes. >>> >>> 4. Several objects providing realizations of specific stochastic >>> processes >>> along a given timeline and across a requested number of paths. >>> Realizations are obtained via explicit formulae, in case the >>> relevant SDE >>> has a closed form solution, or otherwise as `integrator` subclasses >>> performing >>> Euler-Maruyama numerical integration. >>> >>> 5. 
A `montecarlo` class, as an aid to cumulate the results of several >>> Monte Carlo simulations of a given stochastic variable, and to >>> extract >>> summary estimates for its probability distribution function and >>> statistics, thus supporting simulations across a number of paths >>> that exceeds >>> the maximum allowed by available memory. >>> >>> 6. Several analytical results relating to the supported stochastic >>> processes, as a general reference and for testing purposes. >>> >>> For all sources and realizations, process values can take any shape, >>> scalar or multidimensional. Correlated multivariate stochasticity >>> sources are >>> supported. Time-varying process parameters (correlations, intensity of >>> Poisson >>> processes, volatilities etc.) are allowed whenever applicable. >>> `process` instances act as valid stochasticity source realizations (as >>> does >>> any callable object complying with a `source` protocol), and may be >>> passed as a source specification when computing the realization of a >>> given >>> process. >>> >>> Computations are fully vectorized across paths, providing an efficient >>> infrastructure for simulating a large number of process realizations. >>> Less so for a large number of time steps: integrating 1000 time steps >>> across 100000 paths takes seconds, one million time steps across >>> 100 paths takes minutes. >>> >>> General infrastructure >>> ====================== >>> .. autosummary:: >>> :toctree: generated/ >>> >>> process -- Array representation of a process (a subclass of >>> -- numpy.ndarray). >>> kfunc -- Base class for functions with managed keyword >>> arguments. >>> source -- Base class for stochasticity sources. >>> true_source -- Base class for stochasticity sources with memory. >>> integrator -- General framework for numerical SDE integration. >>> montecarlo -- Summary statistics of Monte Carlo simulations. >>> >>> >>> Stochasticity sources >>> ===================== >>> .. 
autosummary:: >>> :toctree: generated/ >>> >>> wiener -- dW, a source of standard Wiener process >>> (Brownian >>> -- motion) increments. >>> symmetric_wiener -- as above, with symmetric paths (averages >>> exactly 0). >>> true_wiener -- dW, a source of standard Wiener process >>> (Brownian >>> -- motion) increments, with memory. >>> poisson -- dN, a source of Poisson process increments. >>> compound_poisson -- dJ, a source of compound Poisson process >>> increments. >>> -- (jumps distributed in time as a Poisson >>> process, >>> -- and in size as a given `scipy.stats` >>> distribution). >>> compound_poisson_rv -- Preset jump size distributions for compound >>> Poisson >>> -- process increments. >>> >>> >>> Stochastic process realizations >>> =============================== >>> .. autosummary:: >>> :toctree: generated/ >>> >>> const_wiener_process -- Wiener process (Brownian motion), >>> with >>> -- time-independent parameters. >>> const_lognorm_process -- Lognormal process, with >>> time-independent >>> -- parameters. >>> wiener_process -- Wiener process (Brownian motion). >>> lognorm_process -- Lognormal process. >>> ornstein_uhlenbeck_process -- Ornstein-Uhlenbeck process >>> (mean-reverting >>> -- Brownian motion). >>> hull_white_process -- F-factors Hull-White process (sum of >>> F >>> -- correlated mean-reverting Brownian >>> -- motions). >>> hull_white_1factor_process -- 1-factor Hull-White process (F=1 >>> Hull-White >>> -- process with F-index collapsed to >>> a >>> -- scalar). >>> cox_ingersoll_ross_process -- Cox-Ingersoll-Ross mean-reverting >>> process. >>> full_heston_process -- Heston stochastic volatility process >>> -- (returns both process and >>> volatility). >>> heston_process -- Heston stochastic volatility process >>> -- (stores and returns process only). >>> jump_diffusion_process -- Jump-diffusion process (lognormal >>> process >>> -- with compound Poisson jumps). 
>>> merton_jump_diffusion_process -- Merton jump-diffusion process >>> -- (jump-diffusion process with >>> normal >>> -- jump size distribution). >>> kou_jump_diffusion_process -- Kou jump-diffusion process >>> (jump-diffusion >>> -- process with double exponential >>> -- jump size distribution). >>> >>> Shortcuts:: >>> >>> lognorm -- lognorm_process >>> oruh -- ornstein_uhlenbeck_process >>> hwp -- hull_white_process >>> hw1f -- hull_white_1factor_process >>> cir -- cox_ingersoll_ross_process >>> heston -- heston_process >>> mjd -- merton_jump_diffusion_process >>> kou -- kou_jump_diffusion_process >>> >>> >>> Analytical results >>> ================== >>> Exact statistics for the realized stochastic processes >>> are listed below, limited to the case of constant process parameters and >>> with >>> some further limitations to the parameters' domains. >>> Function arguments are consistent with those of the corresponding >>> processes. >>> Suffixes `pdf`, `cdf` and `chf` stand respectively for probability >>> distribution >>> function, cumulative probability distribution function, and >>> characteristic >>> function. Black-Scholes formulae for the valuation of call and put >>> options have been >>> included (prefixed with `bs` below). >>> >>> .. 
autosummary:: >>> :toctree: generated/ >>> >>> wiener_mean >>> wiener_var >>> wiener_std >>> wiener_pdf >>> wiener_cdf >>> wiener_chf >>> >>> lognorm_mean >>> lognorm_var >>> lognorm_std >>> lognorm_pdf >>> lognorm_cdf >>> lognorm_log_chf >>> >>> oruh_mean >>> oruh_var >>> oruh_std >>> oruh_pdf >>> oruh_cdf >>> >>> hw2f_mean >>> hw2f_var >>> hw2f_std >>> hw2f_pdf >>> hw2f_cdf >>> >>> cir_mean >>> cir_var >>> cir_std >>> cir_pdf >>> >>> heston_log_mean >>> heston_log_var >>> heston_log_std >>> heston_log_pdf >>> heston_log_chf >>> >>> mjd_log_pdf >>> mjd_log_chf >>> >>> kou_mean >>> kou_log_pdf >>> kou_log_chf >>> >>> bsd1d2 >>> bscall >>> bscall_delta >>> bsput >>> bsput_delta >>> >>> Notes >>> ===== >>> To the benefit of interactive and notebook sessions, and as an aid to >>> fluently >>> freeze or vary the plurality of parameters that define each stochastic >>> process, >>> all sources, process realizations and analytical results are objects >>> with managed keywords (subclasses of `kfunc`): if the corresponding >>> classes >>> are instantiated with both variables (non keyword arguments) and >>> parameters >>> (keyword arguments), they behave as functions returning the computations' >>> result; if called with parameters only, return an instance that stores >>> the set >>> parameters, and exposes the same calling pattern (call with variables >>> and optionally with parameters to get results, call with parameters only >>> to get a new instance storing modified parameters). >>> Parameters that are not specified fall back to the class defaults, if >>> calling >>> the class, or to the instance's stored values, if calling an instance. 
>>> """ >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Sep 11 17:08:37 2017 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 11 Sep 2017 23:08:37 +0200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: <1505052397.2303.14.camel@iki.fi> Message-ID: <1505164117.2582.92.camel@iki.fi> Ralf Gommers kirjoitti 11.09.2017 klo 10:44: [clip] > There's also the Apache one ( > https://www.apache.org/foundation/policies/conduct.html, thanks to > Stefan > for the suggestion), which is also clear and concise, and has a much > friendlier tone than the Contributor Covenant. If we look at the content, there seem to be differences between (A)pache to the (C)ontributor Covenant: - (A) talks about good behavior in detail, including how to write good emails, communication expected when resigning from projects, etc. - (A) suggests to try to deal with the issues by reminding about the CoC. - (C) promises maintainers will consider and take fair action on complaints if necessary. (A) does not promise someone will do something. - (C) gives examples of possible actions taken, (A) not. So it seems that the promises are weaker, and I think the content of the text is here important. *** It seems to me that these documents generally try to do both (1) outline community guidelines for "good professional behavior", and (2) lay out anti-harassment/discrimination/trolling/etc rules. 
Here, it appears to me (A) is more about (1), and (C) is mainly about (2). The tone probably different because of this. I think Scipy has benefited from being a fairly technical project where contributors often have professional and academic background, with existing norms about collegial behavior and communication, and I like to think that this has been reflected in the discussions. It can of course be useful to think about writing such things down explicitly to produce a document explaining (1), and I think the Apache one gives good hints for keeping things productive. But it is less clear to me this is something for which formal moderator action would be necessary. However, if I understand correctly, the reason why people want these things is more about (2). Indeed, this is standard stuff in the workplace and in moderation of internet forums (usually in "Rules" in the latter). It seems a good idea to structure this part so that it does not fail if someone acts in bad faith, and so that the moderation plan is reasonable to the reader and possible to implement. Pauli From stefanv at berkeley.edu Mon Sep 11 20:08:57 2017 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 11 Sep 2017 17:08:57 -0700 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: <1505164117.2582.92.camel@iki.fi> References: <1505052397.2303.14.camel@iki.fi> <1505164117.2582.92.camel@iki.fi> Message-ID: <1505174937.3638904.1102815104.5E1A155A@webmail.messagingengine.com> On Mon, Sep 11, 2017, at 14:08, Pauli Virtanen wrote: > It can of course be useful to think about writing such things down > explicitly to produce a document explaining (1), and I think the Apache > one gives good hints for keeping things productive. But it is less > clear to me this is something for which formal moderator action would > be necessary. > > However, if I understand correctly, the reason why people want these > things is more about (2). 
Indeed, this is standard stuff in the > workplace and in moderation of internet forums (usually in "Rules" in > the latter). It seems a good idea to structure this part so that it > does not fail if someone acts in bad faith, and so that the moderation > plan is reasonable to the reader and possible to implement. I feel it is important to mix in a bit of (1) with (2), the reason being that almost every person reading the CoC will not ever act in bad faith. You'd think that those people could simply ignore language related to enforcement, but in previous discussions (e.g., around the Jupyter CoC) that turned out not to be the case: it is all too easy to frighten people into not speaking up. So, I'd recommend focusing on a description of the kind of community we want, instead of what we're trying to avoid; and postponing the enforcement language until later in the document, making it clear that enforcement only comes through (somewhat wide) deliberation of trusted community members (and, preferably, also after engagement with the offending party). This way, we can hopefully instill trust in our CoC as a process, rather than a set of rules. Best regards Stéfan From andyfaff at gmail.com Tue Sep 12 01:46:33 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 12 Sep 2017 15:46:33 +1000 Subject: [SciPy-Dev] Debugging a segfault Message-ID: I'm using a computing cluster which has no access to the outside world, which means I have had to install everything from source, including Python. I've built scipy and am trying to run the test suite. However, python is crashing somewhere in `io/tests/test_fortran.py`. I tried to run the module in gdb and got the following: https://gist.github.com/andyfaff/18d3e8ee8551c630316afd66591f4cb7 I would try to investigate further but I don't know how to proceed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From larson.eric.d at gmail.com Tue Sep 12 09:43:27 2017 From: larson.eric.d at gmail.com (Eric Larson) Date: Tue, 12 Sep 2017 09:43:27 -0400 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: Message-ID: You could try pytest-faulthandler. Installing it should automatically enable faulthandler, which should hopefully give more information. Eric On Tue, Sep 12, 2017 at 1:46 AM, Andrew Nelson wrote: > I'm using a computing cluster which has no access to the outside world, > which means I have had to install everything from source, including Python. > > I've built scipy and am trying to run the test suite. However, python is > crashing somewhere in `io/tests/test_fortran.py`. > > I tried to run the module in gdb and got the following: > > https://gist.github.com/andyfaff/18d3e8ee8551c630316afd66591f4cb7 > > I would try to investigate further but I don't know how to proceed. > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Sep 12 09:56:26 2017 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 12 Sep 2017 15:56:26 +0200 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: Message-ID: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Andrew Nelson wrote on 12.09.2017 at 07:46: [clip] > https://gist.github.com/andyfaff/18d3e8ee8551c630316afd66591f4cb7 > > I would try to investigate further but I don't know how to proceed. Try first to run the test suite in verbose mode, so we see which test crashes. I would expect this is the Fortran roundtrip test, since it's the only one involving compiled code. Possibly, the fortran compiler on the cluster is something more exotic than gfortran, its unformatted i/o is not compatible, and it handles errors by crashing rather than doing something else. 
Installing pytest-faulthandler should give you the traceback, or you can run the whole test suite in gdb, `gdb --args python runtests.py -n`, `(gdb) run`, ..., `(gdb) thread apply all bt`. From andyfaff at gmail.com Tue Sep 12 10:18:24 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Wed, 13 Sep 2017 00:18:24 +1000 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> References: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Message-ID: On 12 Sep 2017 11:57 pm, "Pauli Virtanen" Possibly, the fortran compiler on the cluster is some more exotic than gfortran, its unformatted i/o is not compatible, and it handles errors by crashing rather than doing something else. I'm using gfortran, I'm not sure what version. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Tue Sep 12 22:04:04 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Wed, 13 Sep 2017 12:04:04 +1000 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Message-ID: Installing pytest-faulthandler did not give a traceback. I'm using Python 3.6, not sure if that's the issue. However, using verbose mode indicated that it's test_fortran_roundtrip that causes the issue, as Pauli thought. commodore$ python runtests.py -v -s io Building, see build.log... 
Build OK (0:00:07.503852 elapsed) =============================================================================== test session starts =============================================================================== platform linux -- Python 3.6.2, pytest-3.2.2, py-1.4.34, pluggy-0.4.0 -- /home/anz/bin/python cachedir: ../../../../../.cache rootdir: /home/anz/software/scipy-master, inifile: pytest.ini plugins: faulthandler-1.3.1 collected 268 items scipy/io/tests/test_fortran.py::test_fortranfile_write_mixed_record PASSED scipy/io/tests/test_fortran.py::test_fortran_roundtrip commodore$ I'm definitely using gfortran: commodore$ ldd _test_fortran.cpython-36m-x86_64-linux-gnu.so linux-vdso.so.1 => (0x00007fffc75d1000) libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x00002b0495f3b000) libm.so.6 => /lib64/libm.so.6 (0x00002b04961d3000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b0496456000) libc.so.6 => /lib64/libc.so.6 (0x00002b0496664000) /lib64/ld-linux-x86-64.so.2 (0x000000398a400000) The gfortran specs are: commodore$ gfortran -v Using built-in specs. Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --disable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux Thread model: posix gcc version 4.1.2 20080704 (Red Hat 4.1.2-50) I also tried running the tests in gdb, using the instructions that were provided, but that didn't give me a stack trace either. A. 
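As an alternative to the pytest plugin, the standard library's `faulthandler` module (Python 3.3+) can be enabled directly in the session that crashes; a minimal sketch:

```python
import faulthandler
import sys

# Dump the Python traceback of every thread to stderr when the process
# receives SIGSEGV, SIGFPE, SIGABRT or SIGBUS.
faulthandler.enable(file=sys.stderr, all_threads=True)
print(faulthandler.is_enabled())  # -> True
```

The same handler can be switched on without touching any code via `python -X faulthandler` or the `PYTHONFAULTHANDLER=1` environment variable.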
On 13 September 2017 at 00:18, Andrew Nelson wrote: > On 12 Sep 2017 11:57 pm, "Pauli Virtanen" > > Possibly, the fortran compiler on the cluster is some more exotic than > gfortran, its unformatted i/o is not compatible, and it handles errors by > crashing rather than doing something else. > > > I'm using gfortran, I'm not sure what version. > > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Tue Sep 12 22:08:24 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Wed, 13 Sep 2017 12:08:24 +1000 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Message-ID: Is there any significance to exiting with Error Code 2? Is that: #define ENOENT 2 /* No such file or directory */ On 13 September 2017 at 12:04, Andrew Nelson wrote: > Installing pytest-faulthandler did not give a traceback. I'm using Python > 3.6, not sure if that's the issue. However, using verbose mode indicated > that it's test_fortran_roundtrip that causes the issue, as Pauli thought. > > commodore$ python runtests.py -v -s io > Building, see build.log... 
> Build OK (0:00:07.503852 elapsed) > =============================================================================== > test session starts ============================== > ================================================= > platform linux -- Python 3.6.2, pytest-3.2.2, py-1.4.34, pluggy-0.4.0 -- > /home/anz/bin/python > cachedir: ../../../../../.cache > rootdir: /home/anz/software/scipy-master, inifile: pytest.ini > plugins: faulthandler-1.3.1 > collected 268 items > > scipy/io/tests/test_fortran.py::test_fortranfile_write_mixed_record PASSED > scipy/io/tests/test_fortran.py::test_fortran_roundtrip commodore$ > > > I'm definitely using gfortran: > > commodore$ ldd _test_fortran.cpython-36m-x86_64-linux-gnu.so > linux-vdso.so.1 => (0x00007fffc75d1000) > libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x00002b0495f3b000) > libm.so.6 => /lib64/libm.so.6 (0x00002b04961d3000) > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b0496456000) > libc.so.6 => /lib64/libc.so.6 (0x00002b0496664000) > /lib64/ld-linux-x86-64.so.2 (0x000000398a400000) > > The gfortran specs are: > > commodore$ gfortran -v > Using built-in specs. > Target: x86_64-redhat-linux > Configured with: ../configure --prefix=/usr --mandir=/usr/share/man > --infodir=/usr/share/info --enable-shared --enable-threads=posix > --enable-checking=release --with-system-zlib --enable-__cxa_atexit > --disable-libunwind-exceptions --enable-libgcj-multifile > --enable-languages=c,c++,objc,obj-c++,java,fortran,ada > --enable-java-awt=gtk --disable-dssi --disable-plugin > --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre > --with-cpu=generic --host=x86_64-redhat-linux > Thread model: posix > gcc version 4.1.2 20080704 (Red Hat 4.1.2-50) > > > I also tried running the tests in gdb, using the instructions that were > provided, but that didn't give me a stack trace either. > > A. 
> > On 13 September 2017 at 00:18, Andrew Nelson wrote: > >> On 12 Sep 2017 11:57 pm, "Pauli Virtanen" >> >> Possibly, the fortran compiler on the cluster is some more exotic than >> gfortran, its unformatted i/o is not compatible, and it handles errors by >> crashing rather than doing something else. >> >> >> I'm using gfortran, I'm not sure what version. >> >> > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Sep 13 06:16:36 2017 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 13 Sep 2017 12:16:36 +0200 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Message-ID: Andrew Nelson kirjoitti 13.09.2017 klo 04:04: [clip] > I also tried running the tests in gdb, using the instructions that were > provided, but that didn't give me a stack trace either. You need to compile it with debug symbols enabled, `rm -rf build; runtest.py -g ...` From ralf.gommers at gmail.com Wed Sep 13 06:39:26 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 13 Sep 2017 22:39:26 +1200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: <1505174937.3638904.1102815104.5E1A155A@webmail.messagingengine.com> References: <1505052397.2303.14.camel@iki.fi> <1505164117.2582.92.camel@iki.fi> <1505174937.3638904.1102815104.5E1A155A@webmail.messagingengine.com> Message-ID: On Tue, Sep 12, 2017 at 12:08 PM, Stefan van der Walt wrote: > On Mon, Sep 11, 2017, at 14:08, Pauli Virtanen wrote: > > It can of course be useful to think about writing such things down > > explicitly to produce a document explaining (1), and I think the Apache > > one gives good hints for keeping things productive. 
But it is less > > clear to me this is something for which formal moderator action would > > be necessary. > > > > However, if I understand correctly, the reason why people want these > > things is more about (2). Indeed, this is standard stuff in the > > workplace and in moderation of internet forums (usually in "Rules" in > > the latter). Agreed that many people want it for (2). There's an important difference though. Forum rules and workplace policies are usually not very visible, while with a CoC for an open source project one wants it to be quite visible. Therefore the importance of it having a positive tone and statements about what we value is a lot more important than for something like a workplace policy. It seems a good idea to structure this part so that it > > does not fail if someone acts in bad faith, and so that the moderation > > plan is reasonable to the reader and possible to implement. > > I feel it is important to mix in a bit of (1) with (2), the reason being > that almost every person reading the CoC will not ever act in bad faith. > You'd think that those people could simply ignore language related to > enforcement, but in previous discussions (e.g., around the Jupyter CoC) > that turned out not to be the case: it is all too easy to frighten > people into not speaking up. > > So, I'd recommend focusing on a description of the kind of community we > want, instead of what we're trying to avoid; and postponing the > enforcement language until later in the document, making it clear that > enforcement only comes through (somewhat wide) deliberation of trusted > community members (and, preferably, also after engagement with the > offending party). > > This way, we can hopefully instill trust in our CoC as a process, rather > than a set of rules. > That sounds quite good to me. The process at a high level (for all but the most severe cases) should be something like: 1. complaint 2. reasonable discussion/feedback 3. 
mediation (if feedback didn't help) 4. enforcement via transparent decision by CoC committee (if mediation failed) And not what some people may be afraid of, and sometimes actually happens in practice: 1. complaint 2. enforcement For a new CoC draft, taking most of the Apache doc and tacking the more rules/enforcement oriented content of the Contributor Covenant onto the end seems like a good starting point. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Sep 13 08:00:30 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 14 Sep 2017 00:00:30 +1200 Subject: [SciPy-Dev] Clarkson Woodruff Transform implementation (Randomized Numerical Linear Algebra) In-Reply-To: References: Message-ID: On Tue, Aug 29, 2017 at 9:49 PM, Jordi Montes wrote: > Hello, > > I just made a pull request to the official repository with the > implementation of the Clarkson Woodruff Transform (Count Min Sketch) for > dense matrices. > > You can find more details of the pull request here > . > Hi all, this PR (https://github.com/scipy/scipy/pull/7809) is now in decent shape. If you have an interest in this topic, please have a look. If there are no further comments, I'll hit the green button soon. Cheers, Ralf > I would not like to make it badly (this is my pull request to the project) > so any advice is welcome (Main implementation/tests/documentation). > > Jordi Montes. > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
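For context on what the PR implements: the Clarkson-Woodruff transform is a CountSketch — each row of the input matrix is multiplied by a random sign and accumulated into one of a small number of buckets. The core idea fits in a few lines of NumPy (an illustration only; the function name and signature below are made up, and the real interface is the PR's `scipy.linalg.clarkson_woodruff_transform`):

```python
import numpy as np

def clarkson_woodruff_sketch(A, sketch_size, seed=0):
    # CountSketch: hash each of A's n rows to one of `sketch_size`
    # buckets with a random +/-1 sign, then sum within buckets.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    buckets = rng.integers(0, sketch_size, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((sketch_size, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)  # unbuffered scatter-add
    return SA

A = np.random.default_rng(42).standard_normal((100, 5))
SA = clarkson_woodruff_sketch(A, sketch_size=30)
```

Because the bucket and sign choices are drawn once per seed, the sketch is a fixed linear map: sketching `2*A` with the same seed gives exactly twice the sketch of `A`.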
URL: From matthew.brett at gmail.com Wed Sep 13 08:12:48 2017 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 13 Sep 2017 13:12:48 +0100 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: <1505052397.2303.14.camel@iki.fi> <1505164117.2582.92.camel@iki.fi> <1505174937.3638904.1102815104.5E1A155A@webmail.messagingengine.com> Message-ID: Hi, On Wed, Sep 13, 2017 at 11:39 AM, Ralf Gommers wrote: > > > On Tue, Sep 12, 2017 at 12:08 PM, Stefan van der Walt > wrote: >> >> On Mon, Sep 11, 2017, at 14:08, Pauli Virtanen wrote: >> > It can of course be useful to think about writing such things down >> > explicitly to produce a document explaining (1), and I think the Apache >> > one gives good hints for keeping things productive. But it is less >> > clear to me this is something for which formal moderator action would >> > be necessary. >> > >> > However, if I understand correctly, the reason why people want these >> > things is more about (2). Indeed, this is standard stuff in the >> > workplace and in moderation of internet forums (usually in "Rules" in >> > the latter). > > > Agreed that many people want it for (2). There's an important difference > though. Forum rules and workplace policies are usually not very visible, > while with a CoC for an open source project one wants it to be quite > visible. Therefore the importance of it having a positive tone and > statements about what we value is a lot more important than for something > like a workplace policy. Yes - I agree strongly. I think this does do dual service as a statement of values as well as well as a threat of enforcement. >> It seems a good idea to structure this part so that it >> > does not fail if someone acts in bad faith, and so that the moderation >> > plan is reasonable to the reader and possible to implement. 
>> >> I feel it is important to mix in a bit of (1) with (2), the reason being >> that almost every person reading the CoC will not ever act in bad faith. >> You'd think that those people could simply ignore language related to >> enforcement, but in previous discussions (e.g., around the Jupyter CoC) >> that turned out not to be the case: it is all too easy to frighten >> people into not speaking up. >> >> So, I'd recommend focusing on a description of the kind of community we >> want, instead of what we're trying to avoid; and postponing the >> enforcement language until later in the document, making it clear that >> enforcement only comes through (somewhat wide) deliberation of trusted >> community members (and, preferably, also after engagement with the >> offending party). >> >> This way, we can hopefully instill trust in our CoC as a process, rather >> than a set of rules. > > > That sounds quite good to me. The process at a high level (for all but the > most severe cases) should be something like: > > 1. complaint > 2. reasonable discussion/feedback > 3. mediation (if feedback didn't help) > 4. enforcement via transparent decision by CoC committee (if mediation > failed) Yes, I like that list a lot. Although, I was proposing in the Jupyter discussion, that informal mediation be triggered really early, to help people get over the feeling of being isolated, that can easily arise in on-line communication - see : https://github.com/jupyter/governance/pull/23#issuecomment-269352416 . This was partly based on my reading of [1] (see quote), which suggested that one reason that women can find online forums intimidating is the lack of a mentor to guide them through the thickets of unfamiliar jargon, habits, and cliques. And partly because, having read that, I realized I felt the same way. > And not what some people may be afraid of, and sometimes actually happens in > practice: > 1. complaint > 2. 
enforcement > > For a new CoC draft, taking most of the Apache doc and tacking the more > rules/enforcement oriented content of the Contributor Covenant onto the end > seems like a good starting point. Yes, I agree with that too ... :) Cheers, Matthew [1] https://suegardner.org/2011/02/19/nine-reasons-why-women-dont-edit-wikipedia-in-their-own-words """ The few times I've touched wikipedia, I've been struck by how isolating it can feel. It's a very fend for yourself kind of place for me. Anywhere else online, my first impulse is to put out feelers. I make friends, ask for links to FAQs and guides, and inevitably someone takes me under their wing and shows me the ropes of whatever niche culture I'm obsessed with that month. """ From andyfaff at gmail.com Wed Sep 13 23:59:16 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 14 Sep 2017 13:59:16 +1000 Subject: [SciPy-Dev] Debugging a segfault In-Reply-To: References: <1ce1259c-c335-b731-d466-ddbd1b6993ea@iki.fi> Message-ID: I tried using all your suggestions Pauli, but didn't get any more information (probably user error, I'm new to this). So I resorted to doing the test in a Python interpreter: Python 3.6.2 (default, Sep 8 2017, 16:44:49) [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> from scipy.io import FortranFile, _test_fortran >>> filename = 'test.dat' >>> m, n, k = 5, 3, 2 >>> a = np.random.randn(m, n, k) >>> with FortranFile(filename, 'w') as f: ... f.write_record(a.T) >>> >>> a2 = _test_fortran.read_unformatted_double(m, n, k, filename) At line 7 of file scipy/io/_test_fortran.f Fortran runtime error: Invalid argument So line 129 in test_fortran.py crashes the interpreter. Line 130 doesn't crash. Lines 135-144 don't crash. 
Line 152 also crashes the interpreter: >>> a2, b2 = _test_fortran.read_unformatted_mixed(m, n, k, filename) At line 28 of file scipy/io/_test_fortran.f Fortran runtime error: Invalid argument I'm taking a stab at something, but line 7 and line 28 of _test_fortran.f both refer to reading doubles. Presumably unformatted doubles in this test (as you suggested). However, I checked that scipy was compiled against gfortran, I've also checked that numpy was compiled with gfortran. A. On 13 September 2017 at 20:16, Pauli Virtanen wrote: > Andrew Nelson kirjoitti 13.09.2017 klo 04:04: > [clip] > >> I also tried running the tests in gdb, using the instructions that were >> provided, but that didn't give me a stack trace either. >> > > You need to compile it with debug symbols enabled, `rm -rf build; > runtest.py -g ...` > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Sep 14 03:55:29 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 14 Sep 2017 19:55:29 +1200 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: On Sun, Aug 20, 2017 at 6:23 PM, Ralf Gommers wrote: > Hi all, > > We're now at 5 months after the 0.19.0 release, and well behind our > planned 1.0 release date [1]. So it's high time for a more concrete plan - > here it is: > > Sep 16: Branch 1.0.x > Sep 17: beta1 > Sep 27: rc1 > Oct 7: rc2 > Oct 17: final release > Hi all, here's a friendly reminder that we're branching 1.0.x in 2 days from now. There aren't any serious blockers in the issues marked for 1.0.0, so let's try to get as much merged as possible, and we'll bump the rest. 
My plan is to work as far as possible through https://github.com/scipy/scipy/milestone/32 for everything except the scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren and Eric L. together can merge most open PRs. For sparse the linalg issues can be fixed later and backported, and the sparse matrix ones it looks like we won't get to (unless CJ you have fixes up your sleeve?). Ralf > This gives us one month to finish and merge everything that we really > would like to see in 1.0 - should be enough time for things that are > already in progress. The list of things now marked for 1.0 [2] is not too > long, and not all of it is critical. It would be useful if everyone could > add PRs/issues that they think are critical to the 1.0 milestone. Adding > yourself as an "assignee" on PRs/issues you plan to tackle would also be > helpful. > > Besides some really nice new features, we made major strides in > infrastructure (CI, testing) and project organisation for this release. Fom > my perspective there are three critical things left: > > 1. Windows wheels. Looks like we're pretty much there, but we need to > ensure that 1.0 is the first release that has Windows wheels available. > 2. Adding a code of conduct. This is the last thing we imho need from an > "open source project maturity" perspective (an FSA would be nice, but can > wait). > 3. Merging a lot of PRs. There's ~150 PRs open now, I hope in the next > month we can focus on reducing that number and getting some nice > improvements in that have been waiting for quite some time. > > Finally, we've discussed before writing a paper about SciPy to coincide > with this release. I'll follow up on that separately. > > Thoughts? > > Cheers, > Ralf > > [1] https://mail.python.org/pipermail/scipy-dev/2016-September/021485.html > [2] https://github.com/scipy/scipy/milestones/1.0 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From larson.eric.d at gmail.com Thu Sep 14 09:39:59 2017 From: larson.eric.d at gmail.com (Eric Larson) Date: Thu, 14 Sep 2017 09:39:59 -0400 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: > > My plan is to work as far as possible through > https://github.com/scipy/scipy/milestone/32 for everything except the > scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren > and Eric L. together can merge most open PRs. > One PR is Warren's, which I'll merge tomorrow. The other three open PRs are mine. One of these depends on an open linalg PR , so it has two levels to get through. I'm happy to iterate quickly on them as necessary, but I'm not sure if it will be possible to get them in by tomorrow. Would it be okay to backport them, too? Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From perimosocordiae at gmail.com Thu Sep 14 10:47:27 2017 From: perimosocordiae at gmail.com (CJ Carey) Date: Thu, 14 Sep 2017 10:47:27 -0400 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: I've cleaned up / pushed back the remaining scipy.sparse items for the 1.0 milestone. There's one scipy.sparse.linalg issue open (gh-7600) but it's outside the scope of what I'm familiar with. On Thu, Sep 14, 2017 at 9:39 AM, Eric Larson wrote: > My plan is to work as far as possible through >> https://github.com/scipy/scipy/milestone/32 for everything except the >> scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren >> and Eric L. together can merge most open PRs. >> > > One PR is Warren's, which I'll merge tomorrow. > > The other three open PRs are mine. One of these depends on an open linalg > PR , so it has two levels to > get through. I'm happy to iterate quickly on them as necessary, but I'm not > sure if it will be possible to get them in by tomorrow. Would it be okay to > backport them, too? 
> > Eric > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 15 05:42:04 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 15 Sep 2017 21:42:04 +1200 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: On Fri, Sep 15, 2017 at 1:39 AM, Eric Larson wrote: > My plan is to work as far as possible through >> https://github.com/scipy/scipy/milestone/32 for everything except the >> scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren >> and Eric L. together can merge most open PRs. >> > > One PR is Warren's, which I'll merge tomorrow. > > The other three open PRs are mine. One of these depends on an open linalg > PR , so it has two levels to > get through. I'm happy to iterate quickly on them as necessary, but I'm not > sure if it will be possible to get them in by tomorrow. Would it be okay to > backport them, too? > I think all those PRs except possibly the dpss one will make it in. I'll work on the beta release before branching, so can postpone that another half day (till Sunday afternoon my time, which is Sat evening US time). We never backport enhancements after branching though, and I wouldn't like to start now. So if people want that dpss window function in 1.0.0, it'll require rapid review and iteration. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 15 05:42:24 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 15 Sep 2017 21:42:24 +1200 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: On Fri, Sep 15, 2017 at 2:47 AM, CJ Carey wrote: > I've cleaned up / pushed back the remaining scipy.sparse items for the 1.0 > milestone. 
> > There's one scipy.sparse.linalg issue open (gh-7600) but it's outside the > scope of what I'm familiar with. > Thanks CJ! Ralf > On Thu, Sep 14, 2017 at 9:39 AM, Eric Larson > wrote: > >> My plan is to work as far as possible through >>> https://github.com/scipy/scipy/milestone/32 for everything except the >>> scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren >>> and Eric L. together can merge most open PRs. >>> >> >> One PR is Warren's, which I'll merge tomorrow. >> >> The other three open PRs are mine. One of these depends on an open linalg >> PR , so it has two levels to >> get through. I'm happy to iterate quickly on them as necessary, but I'm not >> sure if it will be possible to get them in by tomorrow. Would it be okay to >> backport them, too? >> >> Eric >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Sep 16 04:06:00 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 16 Sep 2017 20:06:00 +1200 Subject: [SciPy-Dev] deprecating all misc.pilutil functions for 1.0 Message-ID: Hi all, We've discussed this before, but never actually cleaned up all the misc.pilutil functions. I've just submitted https://github.com/scipy/scipy/pull/7869 which deprecates all of them - they're beyond rescuing and have better alternatives in Matplotlib, scikit-image and Pillow. Details in the PR description. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
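For downstream code using the deprecated `imresize`, the nearest-neighbour case is easy to reproduce in plain NumPy — a rough sketch only (`resize_nearest` is a made-up helper; `imresize` also handled uint8 rescaling and interpolation modes, for which the Pillow and scikit-image resizers are the proper replacements):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    # Map each output pixel to its nearest source pixel via integer scaling.
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
small = resize_nearest(img, 2, 2)  # samples rows/cols 0 and 2
```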
URL: From ralf.gommers at gmail.com Sat Sep 16 17:34:58 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Sep 2017 09:34:58 +1200 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: On Fri, Sep 15, 2017 at 9:42 PM, Ralf Gommers wrote: > > > On Fri, Sep 15, 2017 at 1:39 AM, Eric Larson > wrote: > >> My plan is to work as far as possible through >>> https://github.com/scipy/scipy/milestone/32 for everything except the >>> scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren >>> and Eric L. together can merge most open PRs. >>> >> >> One PR is Warren's, which I'll merge tomorrow. >> >> The other three open PRs are mine. One of these depends on an open linalg >> PR , so it has two levels to >> get through. I'm happy to iterate quickly on them as necessary, but I'm not >> sure if it will be possible to get them in by tomorrow. Would it be okay to >> backport them, too? >> > > I think all those PRs except possibly the dpss one will make it in. I'll > work on the beta release before branching, so can postpone that another > half day (till Sunday afternoon my time, which is Sat evening US time). > > We never backport enhancements after branching though, and I wouldn't like > to start now. So if people want that dpss window function in 1.0.0, it'll > require rapid review and iteration. > Almost there! I sent two deprecation-related PRs yesterday that need to go in. They're already reviewed (thanks Eric, Stefan) but will at least give those 24 hrs after submission so everyone has a chance to have a quick look at them. 
Here's a list of PRs that I plan on merging within about 10 hours from now unless there are new comments/objections: - Wrappers for some LAPACK functions (by Ilhan) https://github.com/scipy/scipy/pull/7851 - Automatic FIR order for decimate (by Eric L) https://github.com/scipy/scipy/pull/7835 - Add eigvals_tridiagonal (by Eric L) https://github.com/scipy/scipy/pull/7810 - Speed up freqz computation (by Eric L) https://github.com/scipy/scipy/pull/7738 - Removing deprecated function (by me) https://github.com/scipy/scipy/pull/7870 - Deprecate misc.pilutil functions (by me) https://github.com/scipy/scipy/pull/7869 - Update 1.0 release notes (by me) https://github.com/scipy/scipy/pull/7868 Anything else is a bonus. I've also started adding PRs for new features that are ready but haven't gotten the reviewer attention they deserve to the 1.1.0 milestone ( https://github.com/scipy/scipy/milestone/34). Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From larson.eric.d at gmail.com Sat Sep 16 17:36:28 2017 From: larson.eric.d at gmail.com (Eric Larson) Date: Sat, 16 Sep 2017 17:36:28 -0400 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: Sounds good to me. Thanks for taking the reins on this. Eric On Sat, Sep 16, 2017 at 5:34 PM, Ralf Gommers wrote: > > > On Fri, Sep 15, 2017 at 9:42 PM, Ralf Gommers > wrote: > >> >> >> On Fri, Sep 15, 2017 at 1:39 AM, Eric Larson >> wrote: >> >>> My plan is to work as far as possible through >>>> https://github.com/scipy/scipy/milestone/32 for everything except the >>>> scipy.signal and scipy.sparse issues/PRs. For signal I'm hoping that Warren >>>> and Eric L. together can merge most open PRs. >>>> >>> >>> One PR is Warren's, which I'll merge tomorrow. >>> >>> The other three open PRs are mine. One of these depends on an open linalg >>> PR , so it has two levels to >>> get through. 
I'm happy to iterate quickly on them as necessary, but I'm not >>> sure if it will be possible to get them in by tomorrow. Would it be okay to >>> backport them, too? >>> >> >> I think all those PRs except possibly the dpss one will make it in. I'll >> work on the beta release before branching, so can postpone that another >> half day (till Sunday afternoon my time, which is Sat evening US time). >> >> We never backport enhancements after branching though, and I wouldn't >> like to start now. So if people want that dpss window function in 1.0.0, >> it'll require rapid review and iteration. >> > > Almost therre! > > I sent two deprecation related PRs yesterday that need to go in. They're > already reviewed (thanks Eric, Stefan) but will at least give those 24 hrs > after submission so everyone has a chance to have a quick look at them. > > Here's a list of PRs that I plan on merging within about 10 hours from now > unless there are new comments/objections: > > - Wrappers for some LAPACK functions (by Ilhan) https://github.com/scipy/ > scipy/pull/7851 > - Automatic FIR order for decimate (by Eric L) https://github.com/scipy/ > scipy/pull/7835 > - Add eigvals_tridiagonal (by Eric L) https://github.com/scipy/ > scipy/pull/7810 > - Speed up freqz computation (by Eric L) https://github.com/scipy/ > scipy/pull/7738 > - Removing deprecated function (by me) https://github.com/scipy/ > scipy/pull/7870 > - Deprecate misc.pilutil functions (by me) https://github.com/scipy/ > scipy/pull/7869 > - Update 1.0 release notes (by me) https://github.com/scipy/ > scipy/pull/7868 > > Anything else is a bonus. > > I've also started adding PRs for new features that are ready but haven't > gotten the reviewer attention they deserve to the 1.1.0 milestone ( > https://github.com/scipy/scipy/milestone/34). 
> > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Sep 17 06:19:35 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Sep 2017 22:19:35 +1200 Subject: [SciPy-Dev] 1.0.x branched Message-ID: Hi all, FYI: a 1.0.x branch has been created, and master is open for new features for 1.1.0 Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Sep 17 06:48:35 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Sep 2017 22:48:35 +1200 Subject: [SciPy-Dev] ANN: SciPy 1.0 beta release Message-ID: Hi all, I'm excited to be able to announce the availability of the first beta release of Scipy 1.0. This is a big release, and a version number that has been 16 years in the making. It contains a few more deprecations and backwards incompatible changes than an average release. Therefore please do test it on your own code, and report any issues on the Github issue tracker or on the scipy-dev mailing list. Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 Binary wheels: will follow tomorrow, I'll announce those when ready (TravisCI is under maintenance right now) Thanks to everyone who contributed to this release! Ralf Release notes (full notes including authors, closed issued and merged PRs at the Github Releases link above): ========================== SciPy 1.0.0 Release Notes ========================== .. note:: Scipy 1.0.0 is not released yet! .. contents:: SciPy 1.0.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. 
All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 1.0.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Major build improvements. Windows wheels are available on PyPI for the first time, and continuous integration has been set up on Windows and OS X in addition to Linux. - A set of new ODE solvers and a unified interface to them (`scipy.integrate.solve_ivp`). - Two new trust region optimizers and a new linear programming method, with improved performance compared to what `scipy.optimize` offered previously. - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now complete. This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly. 
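The unified interface mentioned above can be illustrated with a short sketch (the exponential-decay test problem and the tolerances are my own example choices, not from the release notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# dy/dt = -0.5 * y with y(0) = 2; exact solution is y(t) = 2 * exp(-0.5 * t)
sol = solve_ivp(lambda t, y: -0.5 * y, t_span=(0.0, 4.0), y0=[2.0],
                method="RK45")

print(sol.success)    # whether the solver reached the end of t_span
print(sol.y[0, -1])   # numerical y(4), close to 2 * exp(-2)
```

Swapping ``method="RK45"`` for ``"BDF"`` or ``"LSODA"`` uses one of the other new solvers through the same call.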
`scipy.linalg` improvements
----------------------------

The BLAS wrappers in `scipy.linalg.blas` have been completed. Added functions
are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``,
``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``,
``*sbmv``, ``*spr2``.

Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``,
``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``,
``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been
added.

The function `scipy.linalg.subspace_angles` has been added to compute the
subspace angles between two matrices.

The function `scipy.linalg.clarkson_woodruff_transform` has been added. It
finds low-rank matrix approximation via the Clarkson-Woodruff Transform.

The functions `scipy.linalg.eigh_tridiagonal` and
`scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and
eigenvectors of tridiagonal Hermitian/symmetric matrices, were added.


`scipy.ndimage` improvements
----------------------------

Support for homogeneous coordinate transforms has been added to
`scipy.ndimage.affine_transform`.

The ``ndimage`` C code underwent a significant refactoring, and is now a lot
easier to understand and maintain.


`scipy.optimize` improvements
-----------------------------

The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the
function `scipy.optimize.minimize`. These new trust-region methods solve the
subproblem with higher accuracy at the cost of more Hessian factorizations
(compared to dogleg) or more matrix-vector products (compared to ncg), but
usually require fewer nonlinear iterations and are able to deal with
indefinite Hessians. They seem very competitive against the other Newton
methods implemented in scipy.

`scipy.optimize.linprog` gained an interior point method. Its performance is
superior (both in accuracy and speed) to the older simplex method.
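A quick hedged sketch of the new tridiagonal eigensolvers named above (the discrete-Laplacian test matrix is my own choice):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# symmetric tridiagonal matrix: 2.0 on the main diagonal, -1.0 off-diagonal
d = np.full(4, 2.0)   # main diagonal
e = np.full(3, -1.0)  # first off-diagonal
w, v = eigh_tridiagonal(d, e)

# cross-check against the dense symmetric eigensolver
dense = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(np.allclose(w, np.linalg.eigvalsh(dense)))
```

The tridiagonal routine takes only the two diagonals, so it avoids forming the dense matrix at all for large problems.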
`scipy.signal` improvements
---------------------------

An argument ``fs`` (sampling frequency) was added to the following functions:
``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions
consistent with many other functions in `scipy.signal` in which the sampling
frequency can be specified.

`scipy.signal.freqz` has been sped up significantly for FIR filters.


`scipy.sparse` improvements
---------------------------

Iterating over and slicing of CSC and CSR matrices is now faster by up to
~35%.

The ``tocsr`` method of COO matrices is now several times faster.

The ``diagonal`` method of sparse matrices now takes a parameter, indicating
which diagonal to return.


`scipy.sparse.linalg` improvements
----------------------------------

A new iterative solver for large-scale nonsymmetric sparse linear systems,
`scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a
flexible variant of ``GCROT``.

`scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially
faster convergence.

SuperLU was updated to version 5.2.1.


`scipy.spatial` improvements
----------------------------

Many distance metrics in `scipy.spatial.distance` gained support for weights.

The signatures of `scipy.spatial.distance.pdist` and
`scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order
to support a wider range of metrics (e.g. string-based metrics that need
extra keywords). Also, an optional ``out`` parameter was added to ``pdist``
and ``cdist`` allowing the user to specify where the resulting distance
matrix is to be stored.


`scipy.stats` improvements
--------------------------

The methods ``cdf`` and ``logcdf`` were added to
`scipy.stats.multivariate_normal`, providing the cumulative distribution
function of the multivariate normal distribution.

New statistical distance functions were added, namely
`scipy.stats.wasserstein_distance` for the first Wasserstein distance and
`scipy.stats.energy_distance` for the energy distance.
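A small sketch of the new ``fs`` argument described above (the filter length and frequencies are arbitrary example values):

```python
import numpy as np
from scipy.signal import firwin

fs = 8000.0                        # sampling frequency in Hz
taps = firwin(31, 1000.0, fs=fs)   # 31-tap lowpass FIR, cutoff at 1 kHz

print(len(taps))                      # 31
print(np.allclose(taps, taps[::-1]))  # linear-phase design: taps are symmetric
```

Previously the cutoff had to be expressed relative to ``nyq`` (the Nyquist frequency); with ``fs`` both the cutoff and the sampling frequency are given in the same physical units.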
Deprecated features =================== The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives. ``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never finished set of wrapper functions which is not relevant anymore. The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future and checked that it is a scalar or an array with a single element. Backwards incompatible changes ============================== The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``. The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``. The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`. The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`. The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``. The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed. The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`. 
The deprecated ``ppform`` class was removed from `scipy.interpolate`.

The deprecated keyword ``iprint`` was removed from
`scipy.optimize.fmin_cobyla`.

The default value for the ``zero_phase`` keyword of `scipy.signal.decimate`
has been changed to True.

The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the
method used for random initialization, so using a fixed random seed will not
necessarily produce the same results as in previous versions.

`scipy.special.gammaln` does not accept complex arguments anymore.

The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``,
``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use the
functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and
``spherical_kn``. Be aware that the new functions have different signatures.

The cross-class properties of `scipy.signal.lti` systems have been removed.
For each class below, accessing/setting of the first group of properties has
been removed, as has setting of the second group:

* ``StateSpace`` - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* ``TransferFunction`` - (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ``ZerosPolesGain`` - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. This
was a corner case for which it was unclear that the behavior was
well-defined.

The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather
than an ndarray when the length of alpha is 1.


Other changes
=============

SciPy now has a formal governance structure. It consists of a BDFL (Pauli
Virtanen) and a Steering Committee. See `the governance document
<https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_
for details.

It is now possible to build SciPy on Windows with MSVC + gfortran!
Continuous integration has been set up for this build configuration on
Appveyor, building against OpenBLAS.
Continuous integration for OS X has been set up on TravisCI. The SciPy test suite has been migrated from ``nose`` to ``pytest``. ``scipy/_distributor_init.py`` was added to allow redistributors of SciPy to add custom code that needs to run when importing SciPy (e.g. checks for hardware, DLL search paths, etc.). Support for PEP 518 (specifying build system requirements) was added - see ``pyproject.toml`` in the root of the SciPy repository. In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Sep 17 11:32:15 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 17 Sep 2017 09:32:15 -0600 Subject: [SciPy-Dev] [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Sun, Sep 17, 2017 at 4:48 AM, Ralf Gommers wrote: > Hi all, > > I'm excited to be able to announce the availability of the first beta > release of Scipy 1.0. This is a big release, and a version number that > has been 16 years in the making. It contains a few more deprecations and > backwards incompatible changes than an average release. Therefore please do > test it on your own code, and report any issues on the Github issue tracker > or on the scipy-dev mailing list. > > Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 > Binary wheels: will follow tomorrow, I'll announce those when ready > (TravisCI is under maintenance right now) > > Thanks to everyone who contributed to this release! > Congratulations to all, and an extra congratulations to Matthew and everyone else involved in getting the scipy wheels building on all the supported platforms. 
For those unfamiliar with the history, Ralf became release manager for NumPy 1.4.1 back in early 2010 and switched to full time SciPy release manager in 2011. It has been a long, productive, seven years. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Sep 18 04:53:09 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Sep 2017 20:53:09 +1200 Subject: [SciPy-Dev] [Numpy-discussion] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Mon, Sep 18, 2017 at 3:32 AM, Charles R Harris wrote: > > > On Sun, Sep 17, 2017 at 4:48 AM, Ralf Gommers > wrote: > >> >> >> Thanks to everyone who contributed to this release! >> > > Congratulations to all, and an extra congratulations to Matthew and > everyone else involved in getting the scipy wheels building on all the > supported platforms. > +1 > For those unfamiliar with the history, Ralf became release manager for > NumPy 1.4.1 back in early 2010 and switched to full time SciPy release > manager in 2011. It has been a long, productive, seven years. > It has been for sure, both long and productive:) I can't claim all the release manager credits though: Pauli did 0.14.1-0.15.x and Evgeni 0.17.0-0.19.0. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Sep 18 05:59:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Sep 2017 21:59:15 +1200 Subject: [SciPy-Dev] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Sun, Sep 17, 2017 at 10:48 PM, Ralf Gommers wrote: > Hi all, > > I'm excited to be able to announce the availability of the first beta > release of Scipy 1.0. This is a big release, and a version number that > has been 16 years in the making. It contains a few more deprecations and > backwards incompatible changes than an average release. 
Therefore please do > test it on your own code, and report any issues on the Github issue tracker > or on the scipy-dev mailing list. > > Sources: https://github.com/scipy/scipy/releases/tag/v1.0.0b1 > Binary wheels: will follow tomorrow, I'll announce those when ready > (TravisCI is under maintenance right now) > Binary wheels for Windows, Linux and OS X (for all supported Python versions, 32-bit and 64-bit) can be found at http://wheels.scipy.org. To install directly with pip: pip install scipy=='1.0.0b1' -f http://wheels.scipy.org --trusted-host wheels.scipy.org (add --user and/or --upgrade as required to that command). Alternatively, just download the wheel you need and do `pip install scipy-1.0.0b1-.whl`. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Sep 19 05:10:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 19 Sep 2017 21:10:15 +1200 Subject: [SciPy-Dev] [SciPy-User] ANN: SciPy 1.0 beta release In-Reply-To: References: Message-ID: On Mon, Sep 18, 2017 at 10:36 PM, Matthew Brett wrote: > Hi, > > On Mon, Sep 18, 2017 at 11:14 AM, Ralf Gommers > wrote: > > > > > > On Mon, Sep 18, 2017 at 10:11 PM, Matthew Brett > > > wrote: > >> > >> Hi, > >> > >> On Mon, Sep 18, 2017 at 11:07 AM, Thomas Kluyver > wrote: > >> > On 18 September 2017 at 10:59, Ralf Gommers > >> > wrote: > >> >> > >> >> Binary wheels for Windows, Linux and OS X (for all supported Python > >> >> versions, 32-bit and 64-bit) can be found at http://wheels.scipy.org > . > >> >> To > >> >> install directly with pip: > >> >> > >> >> pip install scipy=='1.0.0b1' -f http://wheels.scipy.org > >> >> --trusted-host > >> >> wheels.scipy.org > >> > > >> > > >> > I don't want to criticise the hard work that has gone into making this > >> > available, but I'm disappointed that we're telling people to install > >> > software over an insecure HTTP connection. 
> >>
> >> I personally prefer the following recipe:
> >>
> >> pip install -f
> >> https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com
> >> scipy=='1.0.0b1'
> >>
> >> > Can the wheels not be uploaded to PyPI?
> >>
> >> Sounds like a good idea. I can do that - any objections?
> >
> > That would be helpful Matthew, I'm about to sign off for today.
>
> Done - new instructions for testing:
>
> pip install --pre --upgrade scipy
>

Thanks Matthew! Replying to all lists with the better install instructions.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From masoud.mansouryar at yahoo.com Tue Sep 19 06:37:12 2017
From: masoud.mansouryar at yahoo.com (Masoud Mansouryar)
Date: Tue, 19 Sep 2017 12:37:12 +0200
Subject: [SciPy-Dev] Constrained Optimization with Scipy
Message-ID:

Hi Friends,

I am working on an optimization problem in Python, which is defined like this:

def objective(x, sign=1.0):
    P_max = x[0]
    P_min = x[1]
    s_max = x[2]
    s_min = x[3]
    s_data, P_f, P_g = PG(a, b, P_max, P_min, P_f_max, s_max, s_min)
    A_tot = AM(s_data, c, d)
    a_cost_opt = a_costFunc(A_tot)
    g_cost_opt = g_costFunc(P_f, P_g)
    res_max = -(a_cost_opt + g_cost_opt)
    print(a_cost_opt + g_cost_opt)
    return res_max

x0 = (2.5, 2.5, .8, .2)  # Starting points
bd0 = (1, 5)
bd1 = (1, 5)
bd2 = (0.7, 0.9)
bd3 = (0.1, 0.3)
bnds = (bd0, bd1, bd2, bd3)  # bounds
minimizer_kwargs = dict(method="L-BFGS-B", bounds=bnds)
sol = sci.optimize.basinhopping(objective, x0,
                                minimizer_kwargs=minimizer_kwargs, niter=2,
                                niter_success=5, T=0.5, stepsize=.1, disp=True)

As you see in my code, due to the complexity of my objective function, I am
using the basinhopping algorithm and so far it is giving me a good result.
That negative sign in my res_max is there to find the maximum indeed
(maximum interest), so I thought a negative minimum is a maximum.
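Since ``PG``, ``AM`` and the cost functions aren't shown, here is a self-contained toy version of the same pattern (the objective below is a made-up stand-in; only the basinhopping + L-BFGS-B wiring mirrors the post):

```python
import numpy as np
from scipy.optimize import basinhopping

def objective(x):
    # toy stand-in for the real cost model; negated so that minimizing
    # this function maximizes the underlying quantity of interest
    return -(np.sin(x[0]) * np.cos(x[1]) - 0.1 * (x[0] - 2.5) ** 2)

x0 = (2.5, 2.5)
bnds = ((1, 5), (1, 5))
minimizer_kwargs = dict(method="L-BFGS-B", bounds=bnds)
sol = basinhopping(objective, x0, minimizer_kwargs=minimizer_kwargs,
                   niter=20, T=0.5, stepsize=0.1)

print(sol.x)     # best point found; L-BFGS-B keeps it inside the bounds
print(-sol.fun)  # the corresponding maximized value
```

Note that basinhopping is a stochastic global strategy: it alternates random steps with local L-BFGS-B minimizations, so it cannot guarantee a global minimum.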
The functions which are being called inside my objective function have
non-linear calculations, and all of their parameters are numpy arrays. But
there are some points unclear for me about this optimization:

1. Is my objective function convex, differentiable, non-linear, discrete?
2. How can I find that my result is a global minimum, not a local minimum?
   (I want the global minimum)
3. Which algorithm is actually minimizing my function, basinhopping or
   L-BFGS-B?
4. Is scipy.optimize.basinhopping the best framework for what I need?

Your help is really appreciated.

Thank you,
Masoud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cimrman3 at ntc.zcu.cz Tue Sep 19 09:23:23 2017
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 19 Sep 2017 15:23:23 +0200
Subject: [SciPy-Dev] ANN: SfePy 2017.3
Message-ID: <300388fd-de7f-ad26-248a-e81de1f18240@ntc.zcu.cz>

I am pleased to announce release 2017.3 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is a software for solving systems
of coupled partial differential equations by the finite element method or by
the isogeometric analysis (limited support). It is distributed under the new
BSD license.

Home page: http://sfepy.org
Mailing list: https://mail.python.org/mm3/mailman3/lists/sfepy.python.org/
Git (source) repository, issue tracker: https://github.com/sfepy/sfepy

Highlights of this release
--------------------------

- support preconditioning in SciPy and PyAMG based linear solvers
- user-defined preconditioners for PETSc linear solvers
- parallel multiscale (macro-micro) homogenization-based computations
- improved tutorial and installation instructions

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).
Cheers,
Robert Cimrman

---

Contributors to this release in alphabetical order:

Robert Cimrman
Lubos Kejzlar
Vladimir Lukes
Matyas Novak

From tristanhearn at gmail.com Tue Sep 19 09:37:01 2017
From: tristanhearn at gmail.com (Tristan H)
Date: Tue, 19 Sep 2017 09:37:01 -0400
Subject: [SciPy-Dev] Enhancements to scipy.interpolate.RegularGridInterpolator
Message-ID:

Hi, long time user, first time (potential) contributor.

I recently developed some techniques for interpolation on regular grids
that include smooth spline interpolation in arbitrary dimensions. This also
includes calculation of gradients with respect to the interpolations.

This method is basically a set of interpolations that acts like a separable
tensor transformation: I apply instances of `UnivariateSpline` to the last
dimension, and "fold" the values into the next dimension, and continue
until a final value (and its gradient) is reached.

This is useful in my work for various surrogate modelling purposes, but is
general enough that it could be useful to others. To date, I found no
existing n-dimensional interpolation library for structured data that
includes gradients, so there is at least some novelty.

I've actually gone ahead with a potential integration into
`RegularGridInterpolator` as an additional set of 'method' options with
tests, a rough cut at documentation, and a light refactor in
`scipy.interpolate` to place `RegularGridInterpolator` and `interpn` into
their own source files and test files. The refactor was originally done to
get around a circular import issue based on a requirement on PPoly, though
it is not strictly necessary any longer.

I'd be happy to talk about the idea more or hear any opinions, or could
open a PR if there is interest in seeing the code.
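For context, a minimal usage sketch of the existing ``RegularGridInterpolator`` that the proposal above would extend (the grid and test function are arbitrary; note the current class returns values only, not gradients):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# sample a smooth function on a regular 2-D grid
x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 1.0, 11)
xg, yg = np.meshgrid(x, y, indexing="ij")
values = np.sin(xg) * np.cos(yg)

interp = RegularGridInterpolator((x, y), values, method="linear")
approx = interp([[0.35, 0.65]])[0]
exact = np.sin(0.35) * np.cos(0.65)
print(abs(approx - exact))  # small interpolation error on a 0.1-spaced grid
```

The absence of a gradient (and of smooth spline methods) from this interface is exactly the gap the proposed enhancement targets.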
Best, -Tristan From ralf.gommers at gmail.com Fri Sep 22 06:33:26 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 22 Sep 2017 22:33:26 +1200 Subject: [SciPy-Dev] Enhancements to scipy.interpolate.RegularGridInterpolator In-Reply-To: References: Message-ID: On Wed, Sep 20, 2017 at 1:37 AM, Tristan H wrote: > Hi, long time user, first time (potential) contributor. > Nice, welcome;) I recently developed some techniques for interpolation on regular grids > that includes smooth spline interpolation in arbitrary dimensions. This > also > includes calculation of gradients with respect to the interpolations. > > This method is basically set of interpolations that acts like a > separable tensor transformation: I apply instances of `UnivariateSpline` to > the last dimension, and "fold" the values into the next dimension, and > continue > until a final value (and its gradient) is reached. > > This is useful to my work for various surrogate modelling purposes, but is > general enough that it could be useful to others. To date, I found no > existing > n-dimensional interpolation library for structured data that includes > gradients, > so there is at least some novelty. > > I've actually gone ahead with a potential integration into > `RegularGridInterpolator` > as an additional set of 'method' options with tests, a rough cut at > documentation, and a > light refactor in `scipy.interpolate` to place > `RegularGridInterpolator` and `interpn` into > their own source files and test files. The refactor was originally > done to get around a circular > import issue based on a requirement on PPoly, though it is not > strictly necessary any longer. > > I'd be happy to talk about the idea more or hear any opinions, or > could open a PR > if there is interest in seeing the code. > Looks interesting. 
Here's a link to the PR for people who don't follow Github: https://github.com/scipy/scipy/pull/7903 Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Sep 22 06:45:11 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 22 Sep 2017 22:45:11 +1200 Subject: [SciPy-Dev] welcome Ilhan to the core team Message-ID: Hi all, On behalf of the SciPy developers I'd like to welcome Ilhan Polat as a member of the core dev team. Ilhan has been contributing to scipy.linalg for a year now, and is responsible for a lot of the new BLAS and LAPACK wrappers as well as improvements to various solvers - https://github.com/scipy/scipy/pulls/ilayn I'm looking forward to his continued contributions! Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Fri Sep 22 09:04:16 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Sep 2017 09:04:16 -0400 Subject: [SciPy-Dev] welcome Ilhan to the core team In-Reply-To: References: Message-ID: On Fri, Sep 22, 2017 at 6:45 AM, Ralf Gommers wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Ilhan Polat as a > member of the core dev team. > > Ilhan has been contributing to scipy.linalg for a year now, and is > responsible for a lot of the new BLAS and LAPACK wrappers as well as > improvements to various solvers - https://github.com/scipy/ > scipy/pulls/ilayn > > I'm looking forward to his continued contributions! > > Cheers, > Ralf > > Welcome Ilhan! Thanks for the great work so far, looking forward to more. Warren _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From jeffmanvillejr at gmail.com Sat Sep 23 23:05:57 2017
From: jeffmanvillejr at gmail.com (Jeffrey Manville)
Date: Sat, 23 Sep 2017 23:05:57 -0400
Subject: [SciPy-Dev] Add feature to approx_fprime
Message-ID:

Hello,

I have been using the approx_fprime and I noticed that the source code as
it is only uses the one sided numerical differentiation. I had to build my
own two sided one by copying and modifying the one sided code.

One sided: F'(x) = ( F(x+h) - F(x) ) / h                 aka Newton method
Two sided: F'(x) = ( F(x+h) - F(x-h) ) / (2*h)           aka Symmetric method

The two sided method tends to be more accurate, but can be more
computationally expensive.

Is this something that would be good to add? I am newer to Python and I
haven't contributed to open source yet, but it seems like a good fit.

I think it would be good to pass in a kwarg called method and set the
default to one sided, so no one's code breaks, but they can choose which
method to use.

Should I keep the decision logic at approx_fprime and have two separate
functions for the two methods?

Here's a wikipedia article on it

Here's a link to the source code

Cheers,

Jeff
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eryole at gmail.com Sun Sep 24 03:46:02 2017
From: eryole at gmail.com (Nicolas CELLIER)
Date: Sun, 24 Sep 2017 09:46:02 +0200
Subject: [SciPy-Dev] Add feature to approx_fprime
In-Reply-To:
References:
Message-ID: <8dd96501-8952-472e-81d8-a39ed609359d@gmail.com>

Hm, tell me if I'm wrong, but in that case you can choose an arbitrary
small epsilon. So accuracy should not be a problem (except in case of
rounding error).

By curiosity, what is your application and why do you think that one sided
isn't enough?

And if accuracy loss is really a problem, have you checked automatic
differentiation?

Nicolas C.

On 24 Sep 2017, at
05:09, Jeffrey Manville wrote:
>Hello,
>
>I have been using the approx_fprime and I noticed that the source code as
>it is only uses the one sided numerical differentiation. I had to build my
>own two sided one by copying and modifying the one sided code.
>
>One sided: F'(x) = ( F(x+h) - F(x) ) / h                 aka Newton method
>Two sided: F'(x) = ( F(x+h) - F(x-h) ) / (2*h)           aka Symmetric method
>
>The two sided method tends to be more accurate, but can be more
>computationally expensive.
>
>Is this something that would be good to add? I am newer to python and I
>haven't contributed to open source yet, but it seems like a good fit.
>
>I think it would be good to pass in a kwarg called method and set the
>default to one sided, so no one's code breaks, but they can choose which
>method to use.
> >Should I keep the decision logic at approx_fprime and have two separate >functions for the two methods? > >Here's a wikipedia article on it > >Here's a link to the source code > > >Cheers, > >Jeff > > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-Dev mailing list >SciPy-Dev at python.org >https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From phillip.m.feldman at gmail.com Sun Sep 24 14:12:07 2017 From: phillip.m.feldman at gmail.com (Phillip Feldman) Date: Sun, 24 Sep 2017 11:12:07 -0700 Subject: [SciPy-Dev] Add feature to approx_fprime In-Reply-To: References: Message-ID: In my opinion, this would be a useful addition. The computational cost looks essentially the same because the number of evaluations of F is the same. The Wikipedia article that you cite also provides a formula for the so-called five-point method. This method is more computationally expensive, but converges very rapidly, so it might be good to include this option as well. Phillip On Sat, Sep 23, 2017 at 8:05 PM, Jeffrey Manville wrote: > Hello, > > I have been using the approx_fprime and I noticed that the source code as > it is only uses the one sided numerical differentiation. I had to build my > own two sided one by copying and modifying the one sided code. > > One sided: F'(x) = ( F(x+h) - F(x) ) / h aka > Newton method > Two sided: F'(x) = ( F(x+h) - F(x-h) ) / (2*h) aka > Symmetric method > > The two sided method tends to be more accurate, but can be more > computationally expensive. > > Is this something that would be good to add? I am newer to python and I > haven't contributed to open source yet, but it seems like a good fit. > > I think it would be good to pass in a kwarg called method and set the > default to one sided, so no one's code breaks, but they can choose which > method to use. 
> Should I keep the decision logic at approx_fprime and have two separate
> functions for the two methods?
>
> Here's a wikipedia article on it
>
> Here's a link to the source code
>
> Cheers,
>
> Jeff
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jeffmanvillejr at gmail.com Sun Sep 24 14:34:42 2017
From: jeffmanvillejr at gmail.com (Jeffrey Manville)
Date: Sun, 24 Sep 2017 14:34:42 -0400
Subject: [SciPy-Dev] Add feature to approx_fprime (Nicolas CELLIER)
Message-ID:

Hello Nick,

So, I am implementing Andrew Ng's machine learning coursework in Python,
which was originally in Matlab/Octave. I was using approx_fprime to
evaluate my implementation of a back propagation algorithm.

To be honest, I didn't know about automatic differentiation. I suspect that
it can be prohibitively expensive for many machine learning algorithms, but
probably not for my project. I could use a smaller epsilon, but then I am
just getting closer and closer to the floating point error problem.

*My project:*
In my project, the "difference" between backprop and the numerical gradient
was expected to be <1e-9 with an epsilon of 1e-4. I was getting about
2.48e-5 with the current approx_fprime vs 1.65e-11 with the two sided
method and the same epsilon. The two sided was significantly more accurate.

*The problem:*
The problem with the one sided method is that it is inherently biased. The
change isn't centered around the point in question for the one sided
method. The two methods are essentially the same method, with the
difference being "where" they guess the calculated derivative is. One sided
guesses that the calculated derivative is at one side of the two points.
The two sided guesses the derivative is in the middle.
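The bias Jeff describes is easy to demonstrate numerically; here is a self-contained sketch (the test function, evaluation point, and step size are my own choices) comparing the truncation error of the two formulas:

```python
import numpy as np

def one_sided(f, x, h):
    # F'(x) ~ (F(x+h) - F(x)) / h, truncation error O(h)
    return (f(x + h) - f(x)) / h

def two_sided(f, x, h):
    # F'(x) ~ (F(x+h) - F(x-h)) / (2*h), truncation error O(h**2)
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.exp, 1.0, 1e-4   # exact derivative of exp at x=1 is e
err_one = abs(one_sided(f, x, h) - np.e)
err_two = abs(two_sided(f, x, h) - np.e)
print(err_one, err_two)  # the two-sided error is orders of magnitude smaller
```

The symmetric difference cancels the even-order terms of the Taylor expansion around x, which is why halving the step quarters its error instead of merely halving it.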
*Proof:*
I attached an Excel doc that has a worked example of the difference.

I have the project on GitHub. Here is the branch that I am currently
working on. If you pull it and run ex4checker.py you will see the
two-sided result; if you change the import on line 3, you will see the
relative difference change from the two-sided to the one-sided method.

Cheers,

Jeff

On Sun, Sep 24, 2017 at 12:00 PM, wrote:

> [digest quote trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Example of One-sided vs Two-sided.xlsx
Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
Size: 20304 bytes
Desc: not available
URL:

From sievert.scott at gmail.com  Sun Sep 24 15:10:27 2017
From: sievert.scott at gmail.com (Scott Sievert)
Date: Sun, 24 Sep 2017 14:10:27 -0500
Subject: [SciPy-Dev] Add feature to approx_fprime (Nicolas CELLIER)
In-Reply-To:
References:
Message-ID: <747CD49D-AF33-45E4-8A58-609FD464A5D0@gmail.com>

> I suspect that it can be prohibitively expensive for many machine
> learning algorithms

This method of differentiation is expensive for machine learning (ML)
applications: its cost grows like O(n) when the variable is n-dimensional.

[Automatic differentiation] is more efficient and widely used in machine
learning. For example, [HIPS/autograd] is a thin wrapper around NumPy that
can compute the gradient in approximately O(5k) time when the function
takes k seconds to compute. It does this by recording the operations done
on a variable and then applying the chain rule.

Various ML libraries and their automatic differentiation strategies can be
found at https://docs.chainer.org/en/stable/comparison.html.
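[Editor's note: the "record operations, then apply the chain rule" idea Scott describes can be shown with a toy reverse-mode differentiator. This is a deliberately minimal sketch for illustration, nothing like autograd's real implementation.]

```python
# Minimal reverse-mode AD: each Var records its parents together with the
# local derivative of the operation that produced it; backward() walks the
# recorded graph, multiplying local derivatives by the chain rule.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # sequence of (Var, local_derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the contribution of this path, then recurse upstream.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(2.0)
z = x * y + x * x      # z = x*y + x^2, so dz/dx = y + 2x = 8, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 8.0 3.0
```

A single backward pass yields the whole gradient, which is why the cost is a small constant factor over evaluating the function, rather than the O(n) function evaluations finite differences need.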
Scott

[HIPS/autograd]: https://github.com/HIPS/autograd
[Automatic differentiation]: https://en.wikipedia.org/wiki/Automatic_differentiation

On 24 Sep 2017, at 13:34, Jeffrey Manville wrote:

> [quote trimmed]
> [quote trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi  Sun Sep 24 15:52:08 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 24 Sep 2017 21:52:08 +0200
Subject: [SciPy-Dev] Add feature to approx_fprime
In-Reply-To:
References:
Message-ID: <1506282728.2439.13.camel@iki.fi>

la, 2017-09-23 kello 23:05 -0400, Jeffrey Manville kirjoitti:
> I have been using the approx_fprime and I noticed that the source
> code as it is only uses the one sided numerical differentiation. I
> had to build my own two sided one by copying and modifying the one
> sided code.

The purpose of approx_fprime was mainly to serve as an internal
differentiation tool for scipy.optimize when the user doesn't provide
gradients explicitly. That it is a public method in scipy.optimize is
sort of a historical artifact.

I do not think it is a useful starting point for building a more
general or more powerful numerical differentiation library, which then
also raises the question that maybe it would be better to just
deprecate/soft-deprecate it rather than trying to extend it.

scipy.optimize currently has a second, more advanced numerical
differentiator that it uses internally (see _numdiff.py), which also
does multipoint schemes.

There has been previous interest in adding a module for numerical
differentiation to scipy. There are, however, other existing modules
for this on PyPI (e.g. numdifftools --- in addition to non-numeric
modules for AD). So before starting, it would be a good idea to think
about the scope of such a module, what functionality it should have,
etc. Some work towards this was done this summer, but afaik it stalled;
I don't recall what the status of the intermediate results is.

--
Pauli Virtanen

From rmcgibbo at gmail.com  Sun Sep 24 16:15:27 2017
From: rmcgibbo at gmail.com (Robert T. McGibbon)
Date: Sun, 24 Sep 2017 16:15:27 -0400
Subject: [SciPy-Dev] Add feature to approx_fprime
In-Reply-To: <1506282728.2439.13.camel@iki.fi>
References: <1506282728.2439.13.camel@iki.fi>
Message-ID:

On Sun, Sep 24, 2017 at 3:52 PM, Pauli Virtanen wrote:
>
> scipy.optimize currently has a second, more advanced numerical
> differentiator that it uses internally (see _numdiff.py), which also
> does multipoint schemes.

For people looking for something simple to use in code right now:

```
from scipy.optimize._numdiff import approx_derivative
```

-Robert

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi  Sun Sep 24 16:47:16 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 24 Sep 2017 22:47:16 +0200
Subject: [SciPy-Dev] Add feature to approx_fprime
In-Reply-To:
References: <1506282728.2439.13.camel@iki.fi>
Message-ID: <1506286036.2439.25.camel@iki.fi>

su, 2017-09-24 kello 16:15 -0400, Robert T. McGibbon kirjoitti:
> For people looking for something simple to use in code right now:
>
> ```
> from scipy.optimize._numdiff import approx_derivative
> ```

Please don't: it's a non-public method (see the leading underscore in
the module name), and it may be removed in any release, and then your
code is broken.
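[Editor's note: one way to heed Pauli's warning while still experimenting is to guard the private import so that a future removal fails into an explicit fallback rather than a crash. A sketch only, not an endorsed pattern; `approx_derivative` is the private SciPy helper Robert mentioned, and `gradient` is a hypothetical wrapper name.]

```python
# Guard the import of SciPy's private numerical-differentiation helper:
# if a future release moves or removes scipy.optimize._numdiff, fall back
# to a local two-sided scheme instead of breaking at import time.
try:
    from scipy.optimize._numdiff import approx_derivative  # private API
except ImportError:
    approx_derivative = None

def gradient(f, x0, h=1e-6):
    if approx_derivative is not None:
        return approx_derivative(f, x0)
    # Fallback: simple two-sided differences, one coordinate at a time.
    grad = []
    for i in range(len(x0)):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad
```

Vendoring the file, as suggested below, avoids even this fragility.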
If you want to use it in your own project, you can copy the `_numdiff.py`
file (and what it needs, probably just _group_columns.pyx), maybe slapping
the BSD license blurb from LICENSE.txt on top.

Pauli

From warren.weckesser at gmail.com  Sun Sep 24 17:05:42 2017
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sun, 24 Sep 2017 17:05:42 -0400
Subject: [SciPy-Dev] Add feature to approx_fprime
In-Reply-To: <1506282728.2439.13.camel@iki.fi>
References: <1506282728.2439.13.camel@iki.fi>
Message-ID:

On Sun, Sep 24, 2017 at 3:52 PM, Pauli Virtanen wrote:

> [quote trimmed]

Don't forget that there is also `scipy.misc.derivative` (
https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.derivative.html),
which uses central differences.

See https://github.com/scipy/scipy/issues/7457 and the links therein for
more background on the ongoing interest in improving scipy's tools for
numerical differentiation.

Warren

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jeffmanvillejr at gmail.com  Sun Sep 24 17:08:23 2017
From: jeffmanvillejr at gmail.com (Jeffrey Manville)
Date: Sun, 24 Sep 2017 17:08:23 -0400
Subject: [SciPy-Dev] Add feature to approx_fprime
Message-ID:

> [Automatic differentiation] is more efficient and widely used in
> machine learning.

Good to know. Another thing to add to the "Learn to Do" list.

However, doesn't automatic differentiation negate the value of this
change? SciPy already has one of these methods, and it's the less
accurate one. Is it inappropriate to have it in the library?

I really don't know how this works. Are features decided by general
consensus? Do I get drawn and quartered for bad ideas?

On Sun, Sep 24, 2017 at 3:10 PM, wrote:

> [digest quote trimmed]
> >> URL: >> 20170924/60433893/attachment-0001.html> > >> > >> ------------------------------ > >> > >> Subject: Digest Footer > >> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at python.org > >> https://mail.python.org/mailman/listinfo/scipy-dev > >> > >> > >> ------------------------------ > >> > >> End of SciPy-Dev Digest, Vol 167, Issue 26 > >> ****************************************** > >> > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: attachments/20170924/94e68b29/attachment.html> > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > ------------------------------ > > End of SciPy-Dev Digest, Vol 167, Issue 28 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Mon Sep 25 02:40:43 2017 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Mon, 25 Sep 2017 08:40:43 +0200 Subject: [SciPy-Dev] Add feature to approx_fprime In-Reply-To: References: Message-ID: On 24 September 2017 at 23:08, Jeffrey Manville wrote: > Do I get drawn and quartered for bad ideas? > Absolutely not! Having new ideas is good, even if they are bad. What you have done is the way it is supposed to be: you had an idea, wrote some example code, and proposed it. It turned out it wasn't a good idea, but there was a nice discussion coming out of it. Ideas are good (but the line is drawn at including curly braces, see "from __future__ import braces") If you want to collaborate and have the time for it, maybe you can continue with the work Pauli mentioned. 
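For reference, the two schemes quoted in this thread can be compared numerically. A minimal standalone sketch — the step size and test function are arbitrary choices for illustration, not code from `approx_fprime`:

```python
import math

def forward_diff(f, x, h=1e-6):
    # One-sided (forward) difference: truncation error is O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    # Two-sided (central/symmetric) difference: truncation error is O(h**2),
    # at the cost of one extra function evaluation per point
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # derivative of sin(x)
err_fwd = abs(forward_diff(math.sin, x) - exact)
err_ctr = abs(central_diff(math.sin, x) - exact)
print(err_fwd, err_ctr)
```

In practice the central error is several orders of magnitude smaller here, which is the accuracy/cost trade-off being discussed.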
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Sep 25 04:55:20 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 25 Sep 2017 21:55:20 +1300 Subject: [SciPy-Dev] Add feature to approx_fprime In-Reply-To: <1506282728.2439.13.camel@iki.fi> References: <1506282728.2439.13.camel@iki.fi> Message-ID: On Mon, Sep 25, 2017 at 8:52 AM, Pauli Virtanen wrote: > On Sat, 2017-09-23 at 23:05 -0400, Jeffrey Manville wrote: > > I have been using the approx_fprime and I noticed that the source > > code as > > it is only uses the one sided numerical differentiation. I had to > > build my > > own two sided one by copying and modifying the one sided code. > > The purpose of approx_fprime was mainly to serve as an internal > differentiation tool for scipy.optimize when the user doesn't provide > gradients explicitly. That it is a public method in scipy.optimize is > sort of a historical artifact. > > I do not think it is a useful starting point for building a more > general or more powerful numerical differentiation library, which then > also raises the question that maybe it would be better to just > deprecate/soft-deprecate it rather than trying to extend it. > > scipy.optimize currently has a second, more advanced numerical > differentiator that it uses internally (see _numdiff.py), which also > does multipoint schemes. > > There has been previous interest in adding a module for numerical > differentiation to scipy. There are, however, other existing modules for > this on PyPI (e.g. numdifftools --- in addition to non-numeric modules > for AD). So before starting, it would be a good idea to think about the > scope of such a module, what functionality it should have, etc. Some work > towards this was done this summer but afaik it stalled; I don't recall > what the status of the intermediate results is. > Unfortunately the results of that one month of GSoC work are not useable.
So there isn't much code to build on. Also, numdifftools development pace has increased since the ~2014 days (IIRC) when we first considered adding it to scipy. And we have things like autograd coming along that understand pretty much all of numpy and even some things in scipy. So after a couple of failed attempts at creating a scipy.diff, I'm no longer convinced that it's worth spending a lot of effort on. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Tue Sep 26 08:41:49 2017 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 26 Sep 2017 12:41:49 +0000 Subject: [SciPy-Dev] Add feature to approx_fprime In-Reply-To: References: <1506282728.2439.13.camel@iki.fi> Message-ID: On Mon, Sep 25, 2017 at 10:55 AM Ralf Gommers wrote: > > Unfortunately the results of that one month of GSoC work are not useable. > > So there isn't much code to build on. Also, numdifftools development pace > has increased since the ~2014 days (IIRC) when we first considered adding > it to scipy. And we have things like autograd coming along that understand > pretty much all of numpy and even some things in scipy. So after a couple > of failed attempts at creating a scipy.diff, I'm no longer convinced that > it's worth spending a lot of effort on. > Just to further muddy the waters, I felt the need for a better numerical differentiator so I put together an adaptive one that is able to use parallelism: https://github.com/aarchiba/futuretools This allows otherwise non-parallel optimization algorithms to experience a speedup equal to roughly twice the dimensionality of the problem. It hasn't seen heavy use, though, so don't use it without careful checking. numdifftools is aimed at high accuracy and fails to exploit parallelism; I don't think it's suitable for use within an optimization algorithm, but if you need derivatives for, say, error analysis it may be worth using. 
Numerical differentiation is hard, and there are a number of approaches to obtaining it with quite complicated cost-benefit analyses. I don't like that the scipy default is terrible, but a minor improvement that doubles the computational cost (f(x) is almost always needed anyway) is probably not that helpful. More useful would be if scipy had a good derivative-free optimization algorithm, but the one I know of, BOBYQA, has licensing complexities and anyways is available in the python package nlopt. (scipy's powell is really terrible, at least at making progress on my problem.) Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Sep 27 06:29:46 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 27 Sep 2017 23:29:46 +1300 Subject: [SciPy-Dev] the demise of SciPy Central Message-ID: Hi all, SciPy Central has been down for weeks, and barely anyone has noticed (2 bug reports). Almost no new content has been created the last year, and the use cases it tried to address are now covered pretty well by StackOverflow and Github/gists. A bit more discussion with Surya and Pauli, who have done most of the maintenance for the site the last couple of years, happened on https://github.com/scipy/SciPyCentral/issues/207. In https://github.com/scipy/scipy.org/pull/226 I've removed links to it from scipy.org. I propose we go ahead and merge that, and shut down the site completely. This is in no way a judgement of the work especially Surya put in during and after his GSoC - more an acknowledgement that times have changed. And maybe that we should collectively try to stay away from sysadmin duties as much as possible (scipy.org, mailman, etc. also have not been fun). Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefanv at berkeley.edu Wed Sep 27 13:42:35 2017 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 27 Sep 2017 10:42:35 -0700 Subject: [SciPy-Dev] the demise of SciPy Central In-Reply-To: References: Message-ID: <1506534155.3734638.1120303936.2A8549F4@webmail.messagingengine.com> Hi Ralf On Wed, Sep 27, 2017, at 03:29, Ralf Gommers wrote: > SciPy Central has been down for weeks, and barely anyone has noticed > (2 bug reports). Almost no new content has been created the last year, > and the use cases it tried to address are now covered pretty well by > StackOverflow and Github/gists. A bit more discussion with Surya and > Pauli, who have done most of the maintenance for the site the last > couple of years, happened on > https://github.com/scipy/SciPyCentral/issues/207. > > In https://github.com/scipy/scipy.org/pull/226 I've removed links to > it from scipy.org. I propose we go ahead and merge that, and shut down > the site completely. There are some very useful scripts on there. Do we have a plan for rescuing the content? StackOverflow seems less discoverable, as does GitHub gists. Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Sep 27 17:41:35 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 28 Sep 2017 10:41:35 +1300 Subject: [SciPy-Dev] ANN: first SciPy 1.0.0 release candidate Message-ID: Hi all, I'm excited to be able to announce the availability of the first release candidate of Scipy 1.0. This is a big release, and a version number that has been 16 years in the making. It contains a few more deprecations and backwards incompatible changes than an average release. Therefore please do test it on your own code, and report any issues on the Github issue tracker or on the scipy-dev mailing list. Sources and binary wheels can be found at https://pypi.python.org/pypi/scipy and https://github.com/scipy/scipy/releases/tag/v1.0.0rc1.
To install with pip: pip install --pre --upgrade scipy Thanks to everyone who contributed to this release! Ralf Pull requests merged after v1.0.0b1: - `#7876 `__: GEN: Add comments to the tests for clarification - `#7891 `__: ENH: backport #7879 to 1.0.x - `#7902 `__: MAINT: signal: Make freqz handling of multidim. arrays match... - `#7905 `__: REV: restore wminkowski - `#7908 `__: FIX: Avoid bad ``__del__`` (close) behavior - `#7918 `__: TST: mark two optimize.linprog tests as xfail. See gh-7877. - `#7929 `__: MAINT: changed defaults to lower in sytf2, sytrf and hetrf - `#7938 `__: MAINT: backports from 1.0.x - `#7939 `__: Fix umfpack solver construction for win-amd64 ========================== SciPy 1.0.0 Release Notes ========================== .. note:: Scipy 1.0.0 is not released yet! .. contents:: SciPy 1.0.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 1.0.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Major build improvements. Windows wheels are available on PyPI for the first time, and continuous integration has been set up on Windows and OS X in addition to Linux. - A set of new ODE solvers and a unified interface to them (`scipy.integrate.solve_ivp`). - Two new trust region optimizers and a new linear programming method, with improved performance compared to what `scipy.optimize` offered previously. - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now complete. This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. 
This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly. `scipy.linalg` improvements ---------------------------- The BLAS wrappers in `scipy.linalg.blas` have been completed. Added functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, ``*sbmv``, ``*spr2``, Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added. The function `scipy.linalg.subspace_angles` has been added to compute the subspace angles between two matrices. The function `scipy.linalg.clarkson_woodruff_transform` has been added. It finds low-rank matrix approximation via the Clarkson-Woodruff Transform. 
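The unified ODE interface mentioned in the `scipy.integrate` section above can be exercised with a short script. A minimal sketch solving y' = -y, where the time span and tolerances are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve y' = -y, y(0) = 1; the exact solution is exp(-t)
sol = solve_ivp(lambda t, y: -y, t_span=(0, 5), y0=[1.0],
                method='RK45', rtol=1e-8, atol=1e-10)

print(sol.success)                     # solver reached the end of t_span
print(abs(sol.y[0, -1] - np.exp(-5)))  # error vs. the exact solution
```

Switching to any of the other solvers is a matter of changing the ``method`` string (e.g. ``'Radau'`` or ``'BDF'`` for stiff problems).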
The functions `scipy.linalg.eigh_tridiagonal` and `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and eigenvectors of tridiagonal hermitian/symmetric matrices, were added. `scipy.ndimage` improvements ---------------------------- Support for homogeneous coordinate transforms has been added to `scipy.ndimage.affine_transform`. The ``ndimage`` C code underwent a significant refactoring, and is now a lot easier to understand and maintain. `scipy.optimize` improvements ----------------------------- The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the function `scipy.optimize.minimize`. These new trust-region methods solve the subproblem with higher accuracy at the cost of more Hessian factorizations (compared to dogleg) or more matrix vector products (compared to ncg) but usually require less nonlinear iterations and are able to deal with indefinite Hessians. They seem very competitive against the other Newton methods implemented in scipy. `scipy.optimize.linprog` gained an interior point method. Its performance is superior (both in accuracy and speed) to the older simplex method. `scipy.signal` improvements --------------------------- An argument ``fs`` (sampling frequency) was added to the following functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions consistent with many other functions in `scipy.signal` in which the sampling frequency can be specified. `scipy.signal.freqz` has been sped up significantly for FIR filters. `scipy.sparse` improvements --------------------------- Iterating over and slicing of CSC and CSR matrices is now faster by up to ~35%. The ``tocsr`` method of COO matrices is now several times faster. The ``diagonal`` method of sparse matrices now takes a parameter, indicating which diagonal to return. 
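The new ``fs`` argument to the filter design functions described above lets cutoffs be given in Hz directly. A minimal sketch of a low-pass design — the sampling rate, filter length and cutoff are arbitrary choices:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 1000.0                                      # sampling frequency in Hz
taps = firwin(numtaps=101, cutoff=100.0, fs=fs)  # 100 Hz low-pass FIR

# Inspect the magnitude response on a frequency grid converted to Hz
w, h = freqz(taps, worN=2048)
freqs = w * fs / (2 * np.pi)
passband = np.abs(h[freqs < 50.0])    # well below the cutoff: gain ~ 1
stopband = np.abs(h[freqs > 250.0])   # well above the cutoff: gain ~ 0
print(passband.min(), stopband.max())
```

Previously the same design required normalizing ``cutoff`` by the Nyquist frequency by hand (``nyq=fs/2``).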
`scipy.sparse.linalg` improvements ---------------------------------- A new iterative solver for large-scale nonsymmetric sparse linear systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a flexible variant of ``GCROT``. `scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially faster convergence. SuperLU was updated to version 5.2.1. `scipy.spatial` improvements ---------------------------- Many distance metrics in `scipy.spatial.distance` gained support for weights. The signatures of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order to support a wider range of metrics (e.g. string-based metrics that need extra keywords). Also, an optional ``out`` parameter was added to ``pdist`` and ``cdist`` allowing the user to specify where the resulting distance matrix is to be stored `scipy.stats` improvements -------------------------- The methods ``cdf`` and ``logcdf`` were added to `scipy.stats.multivariate_normal`, providing the cumulative distribution function of the multivariate normal distribution. New statistical distance functions were added, namely `scipy.stats.wasserstein_distance` for the first Wasserstein distance and `scipy.stats.energy_distance` for the energy distance. Deprecated features =================== The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives. ``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never finished set of wrapper functions which is not relevant anymore. 
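The new statistical distance functions mentioned in the `scipy.stats` section above operate directly on samples of values. A minimal sketch — the sample values are arbitrary, chosen so that one set is the other shifted by 1 (for a pure shift, the first Wasserstein distance equals the shift):

```python
from scipy.stats import energy_distance, wasserstein_distance

u = [0.0, 1.0, 3.0]
v = [1.0, 2.0, 4.0]  # u shifted by 1

print(wasserstein_distance(u, v))  # first Wasserstein distance: 1.0
print(energy_distance(u, v))      # energy distance: positive for u != v
```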
The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future, and it will be checked that it is a scalar or an array with a single element. ``scipy.spatial.distance.matching`` is deprecated. It is an alias of `scipy.spatial.distance.hamming`, which should be used instead. The implementation of `scipy.spatial.distance.wminkowski` was based on a wrong interpretation of the metric definition. In scipy 1.0 it has only been deprecated in the documentation, to keep backwards compatibility; users are recommended to switch to the new version of `scipy.spatial.distance.minkowski`, which implements the correct behaviour. Positional arguments of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` should be replaced with their keyword versions. Backwards incompatible changes ============================== The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``. The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``. The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`. The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`. The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``. The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed. The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`.
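Since ``matching`` is deprecated as noted above, ``hamming`` gives the identical result and is the drop-in replacement. A minimal sketch with arbitrary example vectors:

```python
from scipy.spatial.distance import hamming

u = [1, 0, 1, 1]
v = [1, 1, 1, 0]

# hamming returns the *fraction* of positions that disagree: 2 of 4 here
print(hamming(u, v))  # 0.5
```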
The deprecated ``ppform`` class was removed from `scipy.interpolate`. The deprecated keyword ``iprint`` was removed from `scipy.optimize.fmin_cobyla`. The default value for the ``zero_phase`` keyword of `scipy.signal.decimate` has been changed to True. The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the method used for random initialization, so using a fixed random seed will not necessarily produce the same results as in previous versions. `scipy.special.gammaln` does not accept complex arguments anymore. The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``, ``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and ``spherical_kn``. Be aware that the new functions have different signatures. The cross-class properties of `scipy.signal.lti` systems have been removed. For each class below, accessing or setting the first group of properties has been removed, and setting the second group has been removed:

* ``StateSpace``: (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* ``TransferFunction``: (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ``ZerosPolesGain``: (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. This was a corner case for which it was unclear that the behavior was well-defined. The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather than an ndarray when the length of alpha is 1. Other changes ============= SciPy now has a formal governance structure. It consists of a BDFL (Pauli Virtanen) and a Steering Committee. See `the governance document <https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_ for details. It is now possible to build SciPy on Windows with MSVC + gfortran! Continuous integration has been set up for this build configuration on Appveyor, building against OpenBLAS.
Continuous integration for OS X has been set up on TravisCI. The SciPy test suite has been migrated from ``nose`` to ``pytest``. ``scipy/_distributor_init.py`` was added to allow redistributors of SciPy to add custom code that needs to run when importing SciPy (e.g. checks for hardware, DLL search paths, etc.). Support for PEP 518 (specifying build system requirements) was added - see ``pyproject.toml`` in the root of the SciPy repository. In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Sep 28 00:10:57 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Sep 2017 22:10:57 -0600 Subject: [SciPy-Dev] NumPy 1.13.2 released. Message-ID: Hi All, On behalf of the NumPy team, I am pleased to announce the release of Numpy 1.13.2. This is a bugfix release for some problems found since 1.13.1. The most important fixes are for CVE-2017-12852 and temporary elision. Users of earlier versions of 1.13 should upgrade. The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. The Windows wheels are now built with OpenBLAS instead of ATLAS, which should improve the performance of the linear algebra functions. Contributors ============ A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time. * Allan Haldane * Brandon Carter * Charles Harris * Eric Wieser * Iryna Shcherbina + * James Bourbeau + * Jonathan Helmus * Julian Taylor * Matti Picus * Michael Lamparski + * Michael Seifert * Ralf Gommers Pull requests merged ==================== A total of 20 pull requests were merged for this release.
* #9390 BUG: Return the poly1d coefficients array directly * #9555 BUG: Fix regression in 1.13.x in distutils.mingw32ccompiler. * #9556 BUG: Fix true_divide when dtype=np.float64 specified. * #9557 DOC: Fix some rst markup in numpy/doc/basics.py. * #9558 BLD: Remove -xhost flag from IntelFCompiler. * #9559 DOC: Removes broken docstring example (source code, png, pdf)... * #9580 BUG: Add hypot and cabs functions to WIN32 blacklist. * #9732 BUG: Make scalar function elision check if temp is writeable. * #9736 BUG: Various fixes to np.gradient * #9742 BUG: Fix np.pad for CVE-2017-12852 * #9744 BUG: Check for exception in sort functions, add tests * #9745 DOC: Add whitespace after "versionadded::" directive so it actually... * #9746 BUG: Memory leak in np.dot of size 0 * #9747 BUG: Adjust gfortran version search regex * #9757 BUG: Cython 0.27 breaks NumPy on Python 3. * #9764 BUG: Ensure `_npy_scaled_cexp{,f,l}` is defined when needed. * #9765 BUG: PyArray_CountNonzero does not check for exceptions * #9766 BUG: Fixes histogram monotonicity check for unsigned bin values * #9767 BUG: Ensure consistent result dtype of count_nonzero * #9771 BUG, MAINT: Fix mtrand for Cython 0.27. Enjoy Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Sep 28 03:30:18 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 28 Sep 2017 20:30:18 +1300 Subject: [SciPy-Dev] Add feature to approx_fprime In-Reply-To: References: <1506282728.2439.13.camel@iki.fi> Message-ID: On Wed, Sep 27, 2017 at 1:41 AM, Anne Archibald wrote: > > On Mon, Sep 25, 2017 at 10:55 AM Ralf Gommers > wrote: > >> >> Unfortunately the results of that one month of GSoC work are not useable. >> >> So there isn't much code to build on. Also, numdifftools development pace >> has increased since the ~2014 days (IIRC) when we first considered adding >> it to scipy. 
And we have things like autograd coming along that understand >> pretty much all of numpy and even some things in scipy. So after a couple >> of failed attempts at creating a scipy.diff, I'm no longer convinced that >> it's worth spending a lot of effort on. >> > > Just to further muddy the waters, I felt the need for a better numerical > differentiator so I put together an adaptive one that is able to use > parallelism: https://github.com/aarchiba/futuretools This allows > otherwise non-parallel optimization algorithms to experience a speedup > equal to roughly twice the dimensionality of the problem. It hasn't seen > heavy use, though, so don't use it without careful checking. > Interesting, thanks for sharing. If you plan to keep developing it, maybe good to add to https://scipy.org/topical-software.html ? Ralf > numdifftools is aimed at high accuracy and fails to exploit parallelism; I > don't think it's suitable for use within an optimization algorithm, but if > you need derivatives for, say, error analysis it may be worth using. > > Numerical differentiation is hard, and there are a number of approaches to > obtaining it with quite complicated cost-benefit analyses. I don't like > that the scipy default is terrible, but a minor improvement that doubles > the computational cost (f(x) is almost always needed anyway) is probably > not that helpful. More useful would be if scipy had a good derivative-free > optimization algorithm, but the one I know of, BOBYQA, has licensing > complexities and anyways is available in the python package nlopt. (scipy's > powell is really terrible, at least at making progress on my problem.) > > Anne > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Thu Sep 28 03:38:20 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 28 Sep 2017 20:38:20 +1300 Subject: [SciPy-Dev] the demise of SciPy Central In-Reply-To: <1506534155.3734638.1120303936.2A8549F4@webmail.messagingengine.com> References: <1506534155.3734638.1120303936.2A8549F4@webmail.messagingengine.com> Message-ID: On Thu, Sep 28, 2017 at 6:42 AM, Stefan van der Walt wrote: > Hi Ralf > > On Wed, Sep 27, 2017, at 03:29, Ralf Gommers wrote: > > SciPy Central has been down for weeks, and barely anyone has noticed (2 > bug reports). Almost no new content has been created the last year, and the > use cases it tried to address are now covered pretty well by StackOverflow > and Github/gists. A bit more discussion with Surya and Pauli, who have done > most of the maintenance for the site the last couple of years, happened on > https://github.com/scipy/SciPyCentral/issues/207. > > In https://github.com/scipy/scipy.org/pull/226 I've removed links to it > from scipy.org. I propose we go ahead and merge that, and shut down the > site completely. > > > There are some very useful scripts on there. Do we have a plan for > rescuing the content? StackOverflow seems less discoverable, as does > GitHub gists. > No plan yet. If I'd have to take a stab at it, I'd suggest: 1. Put the content (https://github.com/scipy/SciPyCentral#backup-and-restore) somewhere accessible 2. Take the last version of each script/notebook, discard the history. 3. Write some code to transform all of those into notebooks suitable for inclusion in https://github.com/scipy/scipy-cookbook 4. Go through by hand, see what's in good shape, and send PRs to https://github.com/scipy/scipy-cookbook for those. That may be quite a bit of work though. At least (1) should be done before decommissioning the server. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Sep 29 19:52:17 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 29 Sep 2017 17:52:17 -0600 Subject: [SciPy-Dev] NumPy 1.13.3 released. Message-ID: Hi All, On behalf of the NumPy team, I am pleased to announce the release of Numpy 1.13.3. This is a re-release of 1.13.2, which suffered from compatibility problems, see issue 9786. It is a bugfix release for some problems found since 1.13.1. The most important fixes are for CVE-2017-12852 and the new temporary elision. Users of earlier versions of 1.13 should upgrade. The Python versions supported are 2.7 and 3.4 - 3.6. The Python 3.6 wheels available from PIP are built with Python 3.6.2 and should be compatible with all previous versions of Python 3.6. It was cythonized with Cython 0.26.1, which should be free of the bugs found in 0.27 while also being compatible with Python 3.7-dev. The Windows wheels were built with OpenBLAS instead of ATLAS, which should improve the performance of the linear algebra functions. Wheels and zip archives are available from PyPI; both zip and tar archives are available from GitHub. Contributors ============ A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time. * Allan Haldane * Brandon Carter * Charles Harris * Eric Wieser * Iryna Shcherbina + * James Bourbeau + * Jonathan Helmus * Julian Taylor * Matti Picus * Michael Lamparski + * Michael Seifert * Ralf Gommers Pull requests merged ==================== A total of 20 pull requests were merged for this release. * #9390 BUG: Return the poly1d coefficients array directly * #9555 BUG: Fix regression in 1.13.x in distutils.mingw32ccompiler. * #9556 BUG: Fix true_divide when dtype=np.float64 specified. * #9557 DOC: Fix some rst markup in numpy/doc/basics.py. * #9558 BLD: Remove -xhost flag from IntelFCompiler. * #9559 DOC: Removes broken docstring example (source code, png, pdf)...
* #9580 BUG: Add hypot and cabs functions to WIN32 blacklist. * #9732 BUG: Make scalar function elision check if temp is writeable. * #9736 BUG: Various fixes to np.gradient * #9742 BUG: Fix np.pad for CVE-2017-12852 * #9744 BUG: Check for exception in sort functions, add tests * #9745 DOC: Add whitespace after "versionadded::" directive so it actually... * #9746 BUG: Memory leak in np.dot of size 0 * #9747 BUG: Adjust gfortran version search regex * #9757 BUG: Cython 0.27 breaks NumPy on Python 3. * #9764 BUG: Ensure `_npy_scaled_cexp{,f,l}` is defined when needed. * #9765 BUG: PyArray_CountNonzero does not check for exceptions * #9766 BUG: Fixes histogram monotonicity check for unsigned bin values * #9767 BUG: Ensure consistent result dtype of count_nonzero * #9771 BUG, MAINT: Fix mtrand for Cython 0.27. Enjoy Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From phillip.m.feldman at gmail.com Fri Sep 29 23:31:30 2017 From: phillip.m.feldman at gmail.com (Phillip Feldman) Date: Fri, 29 Sep 2017 20:31:30 -0700 Subject: [SciPy-Dev] support for solving integer programming problems Message-ID: I'm wondering whether anyone has been thinking of adding capability to `scipy.optimize` for solving low-dimensionality integer programming problems. In particular, I'm thinking of such methods as local search or tabu search. Phillip -------------- next part -------------- An HTML attachment was scrubbed... URL:
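Along the lines of the local-search idea raised above, here is a hedged sketch of a plain greedy neighborhood search over a bounded integer lattice. The objective, bounds, and starting point are made up for illustration; this is not an existing `scipy.optimize` API, and a greedy search like this can stall in local minima:

```python
import itertools

def local_search(f, x0, bounds, max_iter=1000):
    """Greedy local search over the integer lattice.

    Moves to the best in-bounds neighbor (each coordinate +/- 1) until
    no neighbor improves the objective. Only suitable for
    low-dimensional problems.
    """
    x = list(x0)
    for _ in range(max_iter):
        best, best_val = None, f(x)
        for i, step in itertools.product(range(len(x)), (-1, 1)):
            y = list(x)
            y[i] += step
            lo, hi = bounds[i]
            if lo <= y[i] <= hi and f(y) < best_val:
                best, best_val = y, f(y)
        if best is None:   # no improving neighbor: done
            return x
        x = best
    return x

# Minimize (x - 2.3)^2 + (y + 1.7)^2 over integers in [-5, 5]^2
f = lambda x: (x[0] - 2.3) ** 2 + (x[1] + 1.7) ** 2
result = local_search(f, [0, 0], [(-5, 5), (-5, 5)])
print(result)  # [2, -2]
```

Tabu search would extend this by keeping a short memory of recently visited points and forbidding moves back to them, which lets the search escape local minima.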