From phillip.m.feldman at gmail.com Sun Aug 6 13:10:54 2017
From: phillip.m.feldman at gmail.com (Phillip Feldman)
Date: Sun, 6 Aug 2017 10:10:54 -0700
Subject: [SciPy-Dev] SciPy roadmap does not address limitations of current filter-related code
Message-ID:

Some of us in the engineering community are unhappy with the SciPy roadmap.
From our perspective, the main deficiency of scipy.signal is that the
filtering functionality is of limited practical utility. We would like to
see the following:

(1) support for lowpass, bandpass, and bandstop FIR filter design, with the
user specifying (a) the passband ripple, (b) the minimum stopband rejection,
and (c) the design method, with this last being optional. Rather than
forcing the user to specify the order of the filter, which requires many
iterations to determine the minimum filter order that will do the job, we'd
like to see the code automatically determine the minimum filter order that
can meet the user's specs.

(2) support for fixed-binary-point arithmetic.

(3) support for filtering and the design of filters that use
fixed-binary-point arithmetic.

Such changes would be a big step in the direction of making
Python+NumPy+SciPy a viable alternative to Matlab + the Matlab Signal
Processing Toolbox.

As an aside, I'd like to comment on the documentation for
`scipy.signal.kaiserord`, which says the following:

scipy.signal.kaiserord(ripple, width)[source]

ripple : float

Positive number specifying maximum ripple in passband (dB) and minimum
ripple in stopband.

When designing a lowpass digital filter, one normally specifies the maximum
ripple in the passband and the minimum rejection in the stopband. With this
function, there is no way to specify how much rejection one gets in the
stopband, and the filter design code is also apparently trying to limit
stopband ripple, which is something that no engineer would care about. The
documentation can't just be badly worded, because there would have to be
another parameter to specify the stopband rejection.

Phillip
--
Dr. Phillip M. Feldman
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Sun Aug 6 13:39:05 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 6 Aug 2017 19:39:05 +0200
Subject: [SciPy-Dev] SciPy roadmap does not address limitations of current filter-related code
In-Reply-To:
References:
Message-ID:

Phillip Feldman wrote on 06.08.2017 at 19:10:
> Some of us in the engineering community are unhappy with the SciPy
> roadmap.

The roadmap is an open document, and suggestions for additions are very
useful. It should not be considered a finalized view of what is to come.
In addition to opening mailing list discussions, you can also change the
text directly by opening a pull request on github.

[clip]
> As an aside, I'd like to comment on the documentation for
> `scipy.signal.kaiserord`, which says the following:

For bug reports, filing an issue ticket on github is a somewhat better way,
as it ensures your comments will not be lost.

--
Pauli Virtanen

From dan at whiteaudio.com Sun Aug 6 15:56:20 2017
From: dan at whiteaudio.com (Dan White)
Date: Sun, 6 Aug 2017 14:56:20 -0500
Subject: [SciPy-Dev] SciPy roadmap does not address limitations of current filter-related code
In-Reply-To:
References:
Message-ID:

Phillip,

pyFDA is a project that builds off of scipy.signal and is similar to
Matlab's filter design toolbox.
https://github.com/chipmuenk/pyFDA

A stated goal is working towards fixed-point capability, and it has the
beginnings of HDL generation via myHDL.

Dan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 446368 at students.wits.ac.za Tue Aug 8 05:18:40 2017
From: 446368 at students.wits.ac.za (Timothy Mehay)
Date: Tue, 8 Aug 2017 11:18:40 +0200
Subject: [SciPy-Dev] 1/f noise generation
Message-ID:

Dear All,

During the course of using numpy/scipy for data processing I've noticed a
lack of routines for generating non-Gaussian noise processes.

I'm interested in contributing code for the generation of 1/f (pink) noise,
and perhaps other noise processes, similar to the Matlab dsp.ColoredNoise
routines.

Any thoughts on whether such an addition would be worthwhile?

Regards
Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ndbecker2 at gmail.com Tue Aug 8 06:58:58 2017
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 08 Aug 2017 10:58:58 +0000
Subject: [SciPy-Dev] 1/f noise generation
In-Reply-To:
References:
Message-ID:

I think you mean non-white, not non-Gaussian

On Tue, Aug 8, 2017, 5:18 AM Timothy Mehay <446368 at students.wits.ac.za> wrote:

> Dear All,
>
> During the course of using numpy/scipy for data processing I've noticed a
> lack of routines for generating non-Gaussian noise processes.
>
> I'm interested in contributing code for the generation of 1/f (pink)
> noise, and perhaps other noise processes, similar to the Matlab
> dsp.ColoredNoise routines.
>
> Any thoughts on whether such an addition would be worthwhile?
>
> Regards
> Tim
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andyfaff at gmail.com Tue Aug 8 07:02:55 2017
From: andyfaff at gmail.com (Andrew Nelson)
Date: Tue, 8 Aug 2017 21:02:55 +1000
Subject: [SciPy-Dev] 1/f noise generation
In-Reply-To:
References:
Message-ID:

Don't forget the random number generation from rv_continuous in scipy.stats.

On 8 Aug 2017 9:01 pm, "Neal Becker" wrote:

I think you mean non-white, not non-Gaussian

On Tue, Aug 8, 2017, 5:18 AM Timothy Mehay <446368 at students.wits.ac.za> wrote:

> Dear All,
>
> During the course of using numpy/scipy for data processing I've noticed a
> lack of routines for generating non-Gaussian noise processes.
>
> I'm interested in contributing code for the generation of 1/f (pink)
> noise, and perhaps other noise processes, similar to the Matlab
> dsp.ColoredNoise routines.
>
> Any thoughts on whether such an addition would be worthwhile?
>
> Regards
> Tim
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
>

_______________________________________________
SciPy-Dev mailing list
SciPy-Dev at python.org
https://mail.python.org/mailman/listinfo/scipy-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From msuzen at gmail.com Tue Aug 8 08:15:33 2017
From: msuzen at gmail.com (Suzen, Mehmet)
Date: Tue, 8 Aug 2017 14:15:33 +0200
Subject: [SciPy-Dev] 1/f noise generation
In-Reply-To:
References:
Message-ID:

On 8 August 2017 at 12:58, Neal Becker wrote:
> I think you mean non-white, not non-Gaussian

Excellent point. White noise means a constant PSD, not only Gaussian.
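
For anyone who wants to experiment in the meantime, here is a minimal
sketch of one common spectral-synthesis approach (the function name, the
normalization, and the handling of the f=0 bin are illustrative choices,
not an existing scipy or Matlab API):

import numpy as np

def powerlaw_noise(n, alpha=1.0, seed=None):
    # Gaussian noise whose PSD falls off like 1/f**alpha (alpha=1 -> pink),
    # made by scaling the FFT of white Gaussian noise by f**(-alpha/2).
    rng = np.random.RandomState(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    scale = np.zeros_like(f)
    scale[1:] = f[1:] ** (-alpha / 2.0)  # leave the f=0 bin at zero (zero mean)
    x = np.fft.irfft(spectrum * scale, n)
    return x / x.std()                   # normalize to unit variance

# x = powerlaw_noise(2**16, alpha=1.0)   # a pink noise sample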
From phillip.m.feldman at gmail.com Thu Aug 10 01:19:04 2017
From: phillip.m.feldman at gmail.com (Phillip Feldman)
Date: Wed, 9 Aug 2017 22:19:04 -0700
Subject: [SciPy-Dev] 1/f noise generation
Message-ID:

Firstly, white noise means only that the power spectral density is flat, or
equivalently, that the autocorrelation function is zero everywhere except
at lag zero.

One can generate Gaussian noise with an arbitrary power spectral density by
filtering white Gaussian noise with a suitable filter. The autocorrelation
of the filter output is the autocorrelation of the filter's impulse
response convolved with the autocorrelation function of white noise (a
delta at lag zero), which gives us back the autocorrelation of the impulse
response. So, from the convolution theorem and the definition of the PSD,
the PSD of the output is the squared magnitude of the frequency response of
the filter.

An alternative method for synthesizing Gaussian noise with an arbitrary
power spectral density is to generate tones with unity amplitude and random
phase, where the probability of picking any particular tone frequency is
proportional to the height of the PSD. The filter method tends to be more
computationally efficient.

Dr. Phillip M. Feldman

Suzen, Mehmet wrote on Tue Aug 8 08:15:33 EDT 2017:
> On 8 August 2017 at 12:58, Neal Becker wrote:
>> I think you mean non-white, not non-Gaussian
>
> Excellent point. White noise means a constant PSD, not only Gaussian.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at gmail.com Fri Aug 11 02:43:58 2017
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Fri, 11 Aug 2017 02:43:58 -0400
Subject: [SciPy-Dev] SciPy roadmap does not address limitations of current filter-related code
In-Reply-To:
References:
Message-ID:

On Sun, Aug 6, 2017 at 1:10 PM, Phillip Feldman wrote:

> Some of us in the engineering community are unhappy with the SciPy
> roadmap. From our perspective, the main deficiency of scipy.signal is that
> the filtering functionality is of limited practical utility. We would like
> to see the following:
>
> (1) support for lowpass, bandpass, and bandstop FIR filter design, with
> the user specifying (a) the passband ripple, (b) the minimum stopband
> rejection, and (c) the design method, with this last being optional. Rather
> than forcing the user to specify the order of the filter, which requires
> many iterations to determine the minimum filter order that will do the job,
> we'd like to see the code automatically determine the minimum filter order
> that can meet the user's specs.
>
> (2) support for fixed-binary-point arithmetic.
>
> (3) support for filtering and the design of filters that use
> fixed-binary-point arithmetic.
>
> Such changes would be a big step in the direction of making
> Python+NumPy+SciPy a viable alternative to Matlab + the Matlab Signal
> Processing Toolbox.
>

Hi Phillip,

Thanks for the feedback. Those are all great suggestions, and would be
valuable additions to SciPy. My comments in this somewhat long response
are my own opinions and not necessarily those of any other SciPy
developers.

The first request, for a method to design a FIR filter (including the
determination of the order of the filter) given the desired passband
ripple and stopband rejection, can be done with the existing tools in
SciPy, if you don't mind a small amount of "brute force" search.
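
For example, a minimal sketch of such a search for a lowpass filter might
look like the following (the search range, the odd-length-only loop, and
the frequency-grid density are illustrative choices):

import numpy as np
from scipy.signal import firwin, freqz

def min_order_lowpass(cutoff, width, max_ripple_db, min_atten_db):
    # cutoff and width are in units of the Nyquist frequency (firwin's
    # default); passband is [0, cutoff - width/2], stopband is
    # [cutoff + width/2, 1].
    for numtaps in range(3, 2001, 2):        # odd lengths: type I filters
        taps = firwin(numtaps, cutoff, width=width)  # Kaiser window design
        w, h = freqz(taps, worN=8000)
        w = w / np.pi                        # normalize so that Nyquist = 1
        mag_db = 20 * np.log10(np.abs(h) + 1e-12)
        pb = mag_db[w <= cutoff - width / 2]
        sb = mag_db[w >= cutoff + width / 2]
        if (pb.max() <= max_ripple_db and pb.min() >= -max_ripple_db
                and sb.max() <= -min_atten_db):
            return numtaps, taps
    raise ValueError("no filter of length <= 1999 met the specs")

# numtaps, taps = min_order_lowpass(0.3, 0.05, max_ripple_db=0.1,
#                                   min_atten_db=60.0)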
I have some code that works reasonably well for lowpass, highpass, bandpass
and bandstop FIR filters. I'll make this available soon, but I don't know
if it will end up in SciPy itself.

Filtering with fixed-binary-point arithmetic is an important topic, but I
don't know if any of the folks currently working on SciPy have experience
in this area. So I don't see it happening in the immediate future. On the
other hand, there are several sharp folks contributing to the 'signal'
package these days, and maybe one or more of them will be interested in
tackling such a project.

Now I have to give what is basically the boilerplate response that anyone
who helps maintain a free, open source software project would give--I'm
sure you've heard it before: SciPy is developed and maintained by
volunteers. We all have our areas of interest and expertise, and we all
have our areas of ignorance. If we can get more people with more diverse
backgrounds involved with SciPy development, we can improve the library
that much faster.

"Involved" doesn't necessarily mean writing code--there are several ways
for you and your colleagues in the engineering community to contribute.

You've already contributed using one of the simplest methods: create bug
reports and requests for enhancements. Let us know what you need. The more
information that you can provide in these reports, the better. In some
cases, you may have to educate the SciPy developers. This can include:

a. Describe algorithms that you know work well. Provide links to published
papers or relevant textbooks where the algorithm is described.

b. Find existing open source implementations that might serve as a basis
for the SciPy implementation. (Avoid, however, software that is GPL
licensed. SciPy will not include software that is derived from GPL
software.) For example, in a response to your email, Dan White pointed out
pyFDA. If you have a chance to try it, let us know if the content looks
suitable for adding to SciPy.

Also, if a feature request or bug fix that you need is already under
discussion in a github issue or pull request, please feel free to join the
discussion.

Of course, you can also contribute code. If SciPy is missing a feature
that you and your colleagues need, then why not get together, create an
implementation, and add it to SciPy via a pull request? Just be prepared
for the iterative code review process, in which the SciPy developers
become familiar with the code in the pull request and also ensure that the
code conforms to the SciPy style, documentation and testing standards.
Major enhancements should be discussed first on the mailing list. More
details are in https://github.com/scipy/scipy/blob/master/HACKING.rst.txt.

I'll reply to your question about kaiserord() in a separate email.

Best regards,

Warren


> As an aside, I'd like to comment on the documentation for
> `scipy.signal.kaiserord`, which says the following:
>
> scipy.signal.kaiserord(ripple, width)[source]
>
> ripple : float
>
> Positive number specifying maximum ripple in passband (dB) and minimum
> ripple in stopband.
>
> When designing a lowpass digital filter, one normally specifies the
> maximum ripple in the passband and the minimum rejection in the stopband.
> With this function, there is no way to specify how much rejection one gets
> in the stopband, and the filter design code is also apparently trying to
> limit stopband ripple, which is something that no engineer would care
> about.
The documentation can't just be badly worded, because there would > have to be another parameter to specify the stopband rejection. > > Phillip > -- > Dr. Phillip M. Feldman > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Fri Aug 11 04:11:46 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 11 Aug 2017 04:11:46 -0400 Subject: [SciPy-Dev] SciPy roadmap does not address limitations of current filter-related code In-Reply-To: References: Message-ID: On Sun, Aug 6, 2017 at 1:10 PM, Phillip Feldman wrote: > Some of us in the engineering community are unhappy with the SciPy > roadmap. From our perspective, the main deficiency of scipy.signal is that > the filtering functionality is of limited practical utility. We would like > to see the following: > > (1) support for lowpass, bandpass, and bandstop FIR filter design, with > the user specifying (a) the passband ripple, (b) the minimum stopband > rejection, and (c) the design method, with this last being optional. Rather > than forcing the user to specify the order of the filter, which requires > many iterations to determine the minimum filter order that will do the job, > we'd like to see the code automatically determine the minimum filter order > that can meet the user's specs. > > (2) support for fixed-binary-point arithmetic. > > (3) support for filtering and the design of filters that use > fixed-binary-point arithmetic. > > Such changes would be a big step in the direction of making > Python+NumPy+SciPy a viable alternative to Matlab + the Matlab Signal > Processing Toolbox. > > As an aside, I'd like to comment on the documentation for > `scipy.signal.kaiserord`, which says the following: > > scipy.signal.kaiserord(ripple, width)[source] > > ripple : float > > Positive number specifying maximum ripple in passband (dB) and minimum > ripple in stopband. > > When designing a lowpass digital filter, one normally specifies the > maximum ripple in the passband and the minimum rejection in the stopband. > With this function, there is no way to specify how much rejection one gets > in the stopband, and the filter design code is also apparently trying to > limit stopband ripple, which is something that no engineer would care > about. The documentation can't just be badly worded, because there would > have to be another parameter to specify the stopband rejection. > > Phillip, It looks like the explanation of the 'ripple' argument of the function 'kaiserord' needs some work. You may be familiar with the following, but for anyone else following along, here's a summary of 'kaiserord'. The function 'kaiserord' implements the empirical FIR filter design formulas developed by Kaiser in the late 60's and early 70's. The reference that I have handy is Sections 7.5.3 and 7.6 of the text "Discrete-Time Signal Processing" (3rd ed.) by Oppenheim and Schafer. It is true that in this method, there is only one parameter that controls the passband ripple *and* the stopband rejection. Let delta be the attenuation (not in dB) in the stop band. In the Kaiser method, delta also determines the ripple of the gain in the pass band: it varies between 1-delta and 1+delta. The stop band rejection in dB is A = -20*log10(delta). This value (in dB) is the first argument of 'kaiserord'. 
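
In code, the correspondence is simply (the numbers here are only an
example):

import numpy as np

A = 65.0                    # desired stopband rejection in dB
delta = 10 ** (-A / 20.0)   # stopband attenuation, not in dB
# The same delta bounds the passband: the gain stays within
# [1 - delta, 1 + delta], i.e. a peak-to-peak passband ripple of
ripple_db = 20 * np.log10((1 + delta) / (1 - delta))
print(delta, ripple_db)     # ~0.000562, ~0.0098 dB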
Kaiser developed an expression for beta, the Kaiser window parameter, that depends on A, and also a formula for the filter order M in terms of A and w, where w is the transition width between the pass and stop bands. The Kaiser window design method, then, is to determine the order M and Kaiser window parameter beta using Kaiser's formula (implemented in `scipy.signal.kaiserord`), and then design the filter using the window method with a Kaiser window (using, for example, `scipy.signal.firwin`): numtaps, beta = kaiserord(A, w) taps = firwin(numtaps, cutoff, window=('kaiser', beta), [other args as needed]) Adding a good example to the docstring of 'kaiserord()' is on the SciPy to-do list (https://github.com/scipy/scipy/issues/7168). In the meantime, I have attached a self-contained script that demonstrates the Kaiser method for a lowpass filter. It generates the attached plot. Best regards, Warren > Phillip > -- > Dr. Phillip M. Feldman > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: kaiser_lowpass_filter_design.py Type: text/x-python-script Size: 2739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: kaiser_lowpass_filter.png Type: image/png Size: 120208 bytes Desc: not available URL: From njs at pobox.com Fri Aug 11 16:54:55 2017 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 11 Aug 2017 13:54:55 -0700 Subject: [SciPy-Dev] 1/f noise generation In-Reply-To: References: Message-ID: On Wed, Aug 9, 2017 at 10:19 PM, Phillip Feldman wrote: > Firstly, white noise means only that the power spectral density is flat, or > equivalently, that the autocorrelation function is zero everywhere except at > lag zero. > > One can generate Gaussian noise with an arbitrary power spectral density by > filtering white Gaussian noise with a suitable filter. The output of the > filter is the convolution of the impulse response of the filter with the > autocorrelation function of white noise, which gives us the impulse response > of the filter. So, from the convolution theorem and the definition of the > PSD, the PSD of the output is the squared magnitude of the frequency > response of filter. It's not actually possible to generate true 1/f noise this way -- technically 1/f noise is non-stationary and doesn't have a PSD. (You can run a PSD estimator on any finite sample of 1/f noise, and get some answer, but as your samples get larger your estimate won't converge, because you keep discovering more and more power at lower and lower frequencies.) So there are specialized methods for generating 1/f noise, involving things like fractional differencing or wavelets. -n -- Nathaniel J. Smith -- https://vorpus.org From ndbecker2 at gmail.com Fri Aug 11 17:41:45 2017 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 11 Aug 2017 21:41:45 +0000 Subject: [SciPy-Dev] 1/f noise generation In-Reply-To: References: Message-ID: good point! On Fri, Aug 11, 2017 at 4:55 PM Nathaniel Smith wrote: > On Wed, Aug 9, 2017 at 10:19 PM, Phillip Feldman > wrote: > > Firstly, white noise means only that the power spectral density is flat, > or > > equivalently, that the autocorrelation function is zero everywhere > except at > > lag zero. 
> > > > One can generate Gaussian noise with an arbitrary power spectral density > by > > filtering white Gaussian noise with a suitable filter. The output of the > > filter is the convolution of the impulse response of the filter with the > > autocorrelation function of white noise, which gives us the impulse > response > > of the filter. So, from the convolution theorem and the definition of the > > PSD, the PSD of the output is the squared magnitude of the frequency > > response of filter. > > It's not actually possible to generate true 1/f noise this way -- > technically 1/f noise is non-stationary and doesn't have a PSD. (You > can run a PSD estimator on any finite sample of 1/f noise, and get > some answer, but as your samples get larger your estimate won't > converge, because you keep discovering more and more power at lower > and lower frequencies.) So there are specialized methods for > generating 1/f noise, involving things like fractional differencing or > wavelets. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phillip.m.feldman at gmail.com Fri Aug 11 18:08:34 2017 From: phillip.m.feldman at gmail.com (Phillip Feldman) Date: Fri, 11 Aug 2017 15:08:34 -0700 Subject: [SciPy-Dev] 1/f noise generation In-Reply-To: References: Message-ID: Because the integral of 1/f from zero to infinity is infinite, the methods that I described won't work if one really wants to reproduce the 1/f characteristic over all frequency. But, for things like communications systems applications, where any receiving system has a finite bandwidth, one doesn't care about the PSD outside the bandwidth of the system, and one would consequently be simulating something that matches the 1/f characteristic over a finite bandwidth, and does just about anything else outside. A PSD that matches the 1/f curve over an interval [f1, f2], where f1>0, and is zero outside that interval, corresponds to a well-behaved process, and no special methods are required. Phillip On Fri, Aug 11, 2017 at 2:41 PM, Neal Becker wrote: > good point! > > On Fri, Aug 11, 2017 at 4:55 PM Nathaniel Smith wrote: > >> On Wed, Aug 9, 2017 at 10:19 PM, Phillip Feldman >> wrote: >> > Firstly, white noise means only that the power spectral density is >> flat, or >> > equivalently, that the autocorrelation function is zero everywhere >> except at >> > lag zero. >> > >> > One can generate Gaussian noise with an arbitrary power spectral >> density by >> > filtering white Gaussian noise with a suitable filter. The output of the >> > filter is the convolution of the impulse response of the filter with the >> > autocorrelation function of white noise, which gives us the impulse >> response >> > of the filter. So, from the convolution theorem and the definition of >> the >> > PSD, the PSD of the output is the squared magnitude of the frequency >> > response of filter. >> >> It's not actually possible to generate true 1/f noise this way -- >> technically 1/f noise is non-stationary and doesn't have a PSD. (You >> can run a PSD estimator on any finite sample of 1/f noise, and get >> some answer, but as your samples get larger your estimate won't >> converge, because you keep discovering more and more power at lower >> and lower frequencies.) 
So there are specialized methods for
>> generating 1/f noise, involving things like fractional differencing or
>> wavelets.
>>
>> -n
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at python.org
>> https://mail.python.org/mailman/listinfo/scipy-dev
>>
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at python.org
> https://mail.python.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From millman at berkeley.edu Fri Aug 18 15:52:59 2017
From: millman at berkeley.edu (Jarrod Millman)
Date: Fri, 18 Aug 2017 12:52:59 -0700
Subject: [SciPy-Dev] NetworkX 2.0b1 released
Message-ID:

Hi All,

I am happy to announce the **beta** release of NetworkX 2.0! NetworkX is a
Python package for the creation, manipulation, and study of the structure,
dynamics, and functions of complex networks.

This release supports Python 2.7 and 3.4-3.6 and contains many new
features. This release is the result of over two years of work with over
600 pull requests by 85 contributors.

We have made **major changes** to the methods in the Multi/Di/Graph classes
and before the 2.0 release we need feedback on those changes. If you have
code that imports networkx, please take some time to check that you are
able to update your code to work with the new release. Please see the
draft of the 2.0 release announcement:
http://networkx.readthedocs.io/en/latest/news.html#networkx-2-0

In particular, we would like feedback on the migration guide from 1.X to 2.0:
http://networkx.readthedocs.io/en/latest/release/migration_guide_from_1.x_to_2.0.html

Since it is a beta release, pip won't automatically install it. So

$ pip install networkx

still installs networkx-1.11. But

$ pip install --pre networkx

will install networkx-2.0b1. If you already have networkx installed then
you need to do

$ pip install --pre --upgrade networkx

For more information, please visit our `website `_ and our `gallery of
examples `_. Please send comments and questions to the `networkx-discuss
mailing list `_ or create an issue `here `_.

Best regards,
Jarrod

From ralf.gommers at gmail.com Sat Aug 19 20:53:59 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 20 Aug 2017 12:53:59 +1200
Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy
Message-ID:

Hi all,

I propose that we as SciPy developers and community adopt a Code of
Conduct.

As you probably know, Code of Conduct (CoC) documents are becoming more
common every year for open source projects, and there are a number of good
reasons to adopt a CoC:
1. It gives us the opportunity to explicitly express the values and
behaviors we'd like to see in our community.
2. It is designed to make everyone feel welcome (and while I think we're a
welcoming community anyway, not having a CoC may look explicitly
unwelcoming to some potential contributors nowadays).
3. It gives us a tool to address a set of problems if and when they occur,
as well as a way for anyone to report issues or behavior that is
unacceptable to them (much better than having those people potentially
leave the community).
4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however I
think we'd like to be in the near future. NumFOCUS has started to require
having a CoC as a prerequisite for new projects joining it. The PSF has
the same requirement for any sponsorship for events/projects that it gives.
Also note that GitHub has started checking for the presence of a CoC fairly
prominently (https://github.com/scipy/scipy/community), and has also
produced a guide with things to think about when formulating a CoC:
https://opensource.guide/code-of-conduct/. I recommend reading that guide
(as well as the other guides on that site); it's really good.

To get to a CoC document, a good approach is to borrow text from a CoC that
has been in use for a while and has proven to be valuable, and then modify
where needed (similar to a software license - don't invent your own). I
considered three existing CoCs:
- The Contributor Covenant (http://contributor-covenant.org/version/1/2/0/):
simple, concise, the most widely used one. The NumFOCUS recommended one is
based on it as well (https://www.numfocus.org/about/code-of-conduct/).
- The Python Community Code of Conduct
(https://www.python.org/psf/codeofconduct/): also simple, addresses mostly
the spirit in which the Python community is operating / should operate.
- The Jupyter Code of Conduct
(https://github.com/jupyter/governance/tree/master/conduct): much more
detailed, in part derived from the Speak up! and Django ones, more
appropriate for large communities.

I think the Python Community CoC isn't a good basis; it's more a statement
of intent for a wider community, while it's missing too many elements that
a project CoC should have. That leaves the choice between the short and
simple Contributor Covenant, and the more extensive Jupyter one. Personally
I like the tone of the Jupyter one *much* more, so I have started from that
one and made the following modifications to make it fit SciPy better:

1. Removed the part about reporting during events (we don't organize those)
and removed the language about events from the faq.
2. Changes to reporting options: removed the form (email should be enough),
and added reporting to NumFOCUS as a second option.
3. Changes to enforcement manual: reply within 72 hours rather than 24
hours (we don't have paid work on SciPy, so 24 hours is not very
realistic). Changed the committee from 5 to 3 members (5 is a lot, and
we're significantly smaller than Jupyter/Django).

Here is a WIP PR with the CoC content:
https://github.com/scipy/scipy/pull/7764. I suggest bringing up any larger
questions/issues here, and detailed textual comments on the PR. Once the
content is agreed upon I will change it to reST and integrate it with the
rest of the docs.

Thoughts? Volunteers for the committee?

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Sun Aug 20 02:23:08 2017
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 20 Aug 2017 18:23:08 +1200
Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal
Message-ID:

Hi all,

We're now at 5 months after the 0.19.0 release, and well behind our planned
1.0 release date [1]. So it's high time for a more concrete plan - here it
is:

Sep 16: Branch 1.0.x
Sep 17: beta1
Sep 27: rc1
Oct 7: rc2
Oct 17: final release

This gives us one month to finish and merge everything that we really would
like to see in 1.0 - should be enough time for things that are already in
progress. The list of things now marked for 1.0 [2] is not too long, and
not all of it is critical. It would be useful if everyone could add
PRs/issues that they think are critical to the 1.0 milestone. Adding
yourself as an "assignee" on PRs/issues you plan to tackle would also be
helpful.
Besides some really nice new features, we made major strides in
infrastructure (CI, testing) and project organisation for this release.
From my perspective there are three critical things left:

1. Windows wheels. Looks like we're pretty much there, but we need to
ensure that 1.0 is the first release that has Windows wheels available.
2. Adding a code of conduct. This is the last thing we imho need from an
"open source project maturity" perspective (an FSA would be nice, but can
wait).
3. Merging a lot of PRs. There's ~150 PRs open now, I hope in the next
month we can focus on reducing that number and getting some nice
improvements in that have been waiting for quite some time.

Finally, we've discussed before writing a paper about SciPy to coincide
with this release. I'll follow up on that separately.

Thoughts?

Cheers,
Ralf

[1] https://mail.python.org/pipermail/scipy-dev/2016-September/021485.html
[2] https://github.com/scipy/scipy/milestones/1.0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Sun Aug 20 08:55:56 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 20 Aug 2017 14:55:56 +0200
Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal
In-Reply-To:
References:
Message-ID: <1503233756.2332.4.camel@iki.fi>

On Sun, 2017-08-20 at 18:23 +1200, Ralf Gommers wrote:
> Sep 16: Branch 1.0.x
> Sep 17: beta1
> Sep 27: rc1
> Oct 7: rc2
> Oct 17: final release

Release schedule looks fine to me.

[clip]
> 3. Merging a lot of PRs. There's ~150 PRs open now, I hope in the
> next
> month we can focus on reducing that number and getting some nice
> improvements in that have been waiting for quite some time.

I think the actionable number of PRs is about half of those. [1] The rest
are stalled waiting for action from the submitter side, which in some cases
is probably not going to happen unless someone else picks up the ball.

Pauli

[1] https://pav.iki.fi/scipy-needs-work/

From alebarde at gmail.com Mon Aug 21 05:57:23 2017
From: alebarde at gmail.com (alebarde at gmail.com)
Date: Mon, 21 Aug 2017 11:57:23 +0200
Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal
In-Reply-To:
References:
Message-ID:

Hi all,

If possible, I would like to see PR7629
(https://github.com/scipy/scipy/pull/7629) in 1.0

Alessandro

2017-08-20 8:23 GMT+02:00 Ralf Gommers :

> Hi all,
>
> We're now at 5 months after the 0.19.0 release, and well behind our
> planned 1.0 release date [1]. So it's high time for a more concrete plan -
> here it is:
>
> Sep 16: Branch 1.0.x
> Sep 17: beta1
> Sep 27: rc1
> Oct 7: rc2
> Oct 17: final release
>
> This gives us one month to finish and merge everything that we really
> would like to see in 1.0 - should be enough time for things that are
> already in progress. The list of things now marked for 1.0 [2] is not too
> long, and not all of it is critical. It would be useful if everyone could
> add PRs/issues that they think are critical to the 1.0 milestone. Adding
> yourself as an "assignee" on PRs/issues you plan to tackle would also be
> helpful.
>
> Besides some really nice new features, we made major strides in
> infrastructure (CI, testing) and project organisation for this release.
> From my perspective there are three critical things left:
>
> 1. Windows wheels. Looks like we're pretty much there, but we need to
> ensure that 1.0 is the first release that has Windows wheels available.
> 2. Adding a code of conduct.
This is the last thing we imho need from an > "open source project maturity" perspective (an FSA would be nice, but can > wait). > 3. Merging a lot of PRs. There's ~150 PRs open now, I hope in the next > month we can focus on reducing that number and getting some nice > improvements in that have been waiting for quite some time. > > Finally, we've discussed before writing a paper about SciPy to coincide > with this release. I'll follow up on that separately. > > Thoughts? > > Cheers, > Ralf > > [1] https://mail.python.org/pipermail/scipy-dev/2016-September/021485.html > [2] https://github.com/scipy/scipy/milestones/1.0 > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- -------------------------------------------------------------------------- NOTICE: Dlgs 196/2003 this e-mail and any attachments thereto may contain confidential information and are intended for the sole use of the recipient(s) named above. If you are not the intended recipient of this message you are hereby notified that any dissemination or copying of this message is strictly prohibited. If you have received this e-mail in error, please notify the sender either by telephone or by e-mail and delete the material from any computer. Thank you. -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Aug 21 06:10:17 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 21 Aug 2017 22:10:17 +1200 Subject: [SciPy-Dev] SciPy 1.0 release schedule proposal In-Reply-To: References: Message-ID: On Mon, Aug 21, 2017 at 9:57 PM, alebarde at gmail.com wrote: > Hi all, > If possible, I would like to see PR7629 (https://github.com/scipy/ > scipy/pull/7629) in 1.0 > Yes, that PR, gh-7630 and gh-7637 (all about pdist/cdist) have been hovering close to the top of my review list for a while. I've added all of them to the 1.0 milestone. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Aug 24 06:11:29 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 24 Aug 2017 22:11:29 +1200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sun, Aug 20, 2017 at 12:53 PM, Ralf Gommers wrote: > Hi all, > > I propose that we as SciPy developers and community adopt a Code of > Conduct. > > As you probably know, Code of Conduct (CoC) documents are becoming more > common every year for open source projects, and there are a number of good > reasons to adopt a CoC: > 1. It gives us the opportunity to explicitly express the values and > behaviors we'd like to see in our community. > 2. It is designed to make everyone feel welcome (and while I think we're a > welcoming community anyway, not having a CoC may look explicitly > unwelcoming to some potential contributors nowadays). > 3. It gives us a tool to address a set of problems if and when they occur, > as well as a way for anyone to report issues or behavior that is > unacceptable to them (much better than having those people potentially > leave the community). > 4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however I > think we'd like to be in the near future. NumFOCUS has started to require > having a CoC as a prerequisite for new projects joining it. 
The PSF has
> the same requirement for any sponsorship for events/projects that it gives.
>
> Also note that GitHub has started checking for the presence of a CoC fairly
> prominently (https://github.com/scipy/scipy/community), and has also
> produced a guide with things to think about when formulating a CoC:
> https://opensource.guide/code-of-conduct/. I recommend reading that guide
> (as well as the other guides on that site); it's really good.
>
> To get to a CoC document, a good approach is to borrow text from a CoC
> that has been in use for a while and has proven to be valuable, and then
> modify where needed (similar to a software license - don't invent your
> own). I considered three existing CoCs:
> - The Contributor Covenant (http://contributor-covenant.org/version/1/2/0/):
> simple, concise, the most widely used one. The NumFOCUS recommended one is
> based on it as well (https://www.numfocus.org/about/code-of-conduct/).
> - The Python Community Code of Conduct
> (https://www.python.org/psf/codeofconduct/): also simple, addresses mostly
> the spirit in which the Python community is operating / should operate.
> - The Jupyter Code of Conduct
> (https://github.com/jupyter/governance/tree/master/conduct): much more
> detailed, in part derived from the Speak up! and Django ones, more
> appropriate for large communities.
>
> I think the Python Community CoC isn't a good basis; it's more a statement
> of intent for a wider community, while it's missing too many elements that
> a project CoC should have. That leaves the choice between the short and
> simple Contributor Covenant, and the more extensive Jupyter one. Personally
> I like the tone of the Jupyter one *much* more,
>

It was pointed out to me that the process/discussion of introducing a CoC
for Jupyter happened in a very sub-optimal way, and not everyone who was
involved was happy with the end result. Clearly we want to avoid repeating
any mistakes that were made there, so here's a plan:

a) Have multiple ways to give feedback (not everyone may be comfortable
doing so on a public list), and take that seriously. Feedback in private is
very welcome. Also, we'll hold an open conference call (after step (b) is
done, I'll send out a time/date later).
b) Iterate on the content of the PR a bit to address item 4 below. Matthew
has offered to take the lead on this.

My high level objectives were:
1. We need to have a CoC
2. It has to have a friendly tone
3. It has to have an enforcement mechanism

A (imho valid) criticism of the Jupyter CoC is that it's too vague on what
is/isn't allowed and what potential consequences are of doing something
that isn't allowed. It's clearly not possible to cover every scenario;
however, things can be illustrated by example. E.g. just being a little
unfriendly will not get you banned from the mailing list (but maybe will
result in a private email asking to keep it friendly), while threatening
physical violence will. And robust discussion is still very welcome. So
objective to add:
4. Be clear on what is and isn't okay, and how that will be handled.

So update to follow. Hopefully we end up with everyone being happy with
both the content of the CoC and the way we arrived at it.

Cheers,
Ralf

> so I have started from that one and made the following modifications to
> make it fit SciPy better:
>
> 1. Removed the part about reporting during events (we don't organize
> those) and removed the language about events from the faq.
> 2.
Changes to reporting options: removed the form (email should be
> enough), and added reporting to NumFOCUS as a second option.
> 3. Changes to enforcement manual: reply within 72 hours rather than 24
> hours (we don't have paid work on SciPy, so 24 hours is not very
> realistic). Changed the committee from 5 to 3 members (5 is a lot, and
> we're significantly smaller than Jupyter/Django).
>
> Here is a WIP PR with the CoC content:
> https://github.com/scipy/scipy/pull/7764. I suggest bringing up any
> larger questions/issues here, and detailed textual comments on the PR.
> Once the content is agreed upon I will change it to reST and integrate
> it with the rest of the docs.
>
> Thoughts? Volunteers for the committee?
>
> Cheers,
> Ralf
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Thu Aug 24 18:56:37 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 25 Aug 2017 00:56:37 +0200
Subject: [SciPy-Dev] Scipy Windows wheels
Message-ID: <1503615397.2351.22.camel@iki.fi>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Dear all,

Prerelease binary wheels for Scipy on Windows 32-bit & 64-bit are now
available in case you would like to test them.

Currently, the plans are that binary wheels will also be provided for
future releases on PyPi, so that you will be able to do simply "pip
install scipy" also on Windows. At least, assuming we manage to test
these wheels well enough for which help would be useful.

You can install the scipy prerelease packages as shown below. Note that
they are meant for testing only, and correspond to the current Scipy
development version. Please report issues found on the Scipy issue
tracker on github (be sure to mention how you installed scipy and
python).

The wheels are meant to be used with the Python obtained from
https://python.org --- these are not meant to be used with e.g. Conda,
although it may be they work.

The work leading to a viable automatized compilation approach was done
in
https://github.com/scipy/scipy/pull/7616
https://github.com/numpy/numpy/pull/9431

Example:

C:\Users\pauli\src\env2\Scripts>pip install -f https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com/ --pre scipy
Collecting scipy
  Downloading https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com/scipy-1.0.0.dev0+20170824221943_2a1fdcf-cp36-none-win32.whl (26.0MB)
    100% |████████████████████████████████| 26.0MB 47kB/s
Collecting numpy>=1.8.2 (from scipy)
  Downloading https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com/numpy-1.14.0.dev0+20170824081646_707f33f-cp36-none-win32.whl (6.8MB)
    100% |████████████████████████████████| 6.9MB 168kB/s
Installing collected packages: numpy, scipy
Successfully installed numpy-1.14.0.dev0+707f33f scipy-1.0.0.dev0+2a1fdcf

C:\Users\pauli\src\env2\Scripts>python
Python 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:14:34) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.integrate, scipy.linalg, numpy as np
>>> scipy.integrate.quad(lambda x: 1/(1 + x**2), -np.inf, np.inf)
(3.141592653589793, 5.155583041103855e-10)
>>> scipy.linalg.eigvals([[1,0],[0,2]])
array([ 1.+0.j, 2.+0.j])
>>> exit()

- --
Pauli Virtanen
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEiNDtwDFNJaodBiJZHOiglY3YpVcFAlmfWaUACgkQHOiglY3Y
pVfxSg//VvwYDanA/tam34I2hrwQQFtQaW3E/ikqNAM91XvQwluc9RsVVDx2zspk
9ywXyfFbnNpMK+/emow8o6kuaFes1NvutobbAOQ4L9jj0ofD9pCWVM6SLkRIkSea
km7RdK13vpWtrghPkvkqGFNhY2eDSuV4S8qeR+78KwSUADYjB0m1Yfpfm6LtUKOX
tKwGhDGWzi1vcBPJqgQQYJDjBbVNbY5aao6QnjLeNkgXW6RZYhxUyBeWph7GPrEL
pNFvoOknjxa5nItZvt948+7PsgZZarHGlyqeAy8Nb0Bkukm1Uovo7V5gMfiDS6nT
cA2xNkELh9Zoyr+9kRaaDh2B0U6qWPOiU/IE6VvCK72N70tzdi59a0GkzLzVen2b
hgK5RBsa5fL9sxo4oN/bcApnUp6K98XAV4eJhIlZPbnnvSfqbKobX7D1G+qBokBN
90XnWLUjkJpzr1emqyrQPVbrd8OflIhs2aQv0l5gZKrXuBgGFgoCDwEJmrzd6K+n
1iLr73BuZEFN/jLvT9cx+XbAQkXhCbD2hL4ly0u7BuBzAbOE19iugSnap/sjueRW
FlOKSddobW86TeOICKurH9TCcFRu6mu1tQvCkucqkY49gXpu3srzUcdog9gQe46H
2JFNQICFaYWhF7jVY9cwOXssEHOc6PCa0FdOxMX5W/p5k0xuqzA=
=tjJg
-----END PGP SIGNATURE-----

From josef.pktd at gmail.com Thu Aug 24 19:49:33 2017
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 24 Aug 2017 19:49:33 -0400
Subject: [SciPy-Dev] Scipy Windows wheels
In-Reply-To: <1503615397.2351.22.camel@iki.fi>
References: <1503615397.2351.22.camel@iki.fi>
Message-ID:

On Thu, Aug 24, 2017 at 6:56 PM, Pauli Virtanen wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Dear all,
>
> Prerelease binary wheels for Scipy on Windows 32-bit & 64-bit are now
> available in case you would like to test them.
>
> Currently, the plans are that binary wheels will also be provided for
> future releases on PyPi, so that you will be able to do simply "pip
> install scipy" also on Windows. At least, assuming we manage to test
> these wheels well enough for which help would be useful.
>
> You can install the scipy prerelease packages as shown below. Note that
> they are meant for testing only, and correspond to the current Scipy
> development version. Please report issues found on the Scipy issue
> tracker on github (be sure to mention how you installed scipy and
> python).
>
> The wheels are meant to be used with the Python obtained from
> https://python.org --- these are not meant to be used with e.g. Conda,
> although it may be they work.
>

What's the numpy requirement?
I assume the scipy version should not be used with a currently installed
numpy unless it is Fortran compatible.
For example, Winpython distributes Gohlke's binaries built with MKL.
Is there an automatic check when not installing into an empty virtual
environment?
Josef > > The work leading to a viable automatized compilation approach was done > in > https://github.com/scipy/scipy/pull/7616 > https://github.com/numpy/numpy/pull/9431 > > Example: > > C:\Users\pauli\src\env2\Scripts>pip install -f > https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a > 83.ssl.cf2.rackcdn.com/ --pre scipy > Collecting scipy > Downloading https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a > 83.ssl.cf2.rackcdn.com/scipy-1.0.0.dev0+20170824221943_ > 2a1fdcf-cp36-none-win32.whl (26.0MB) > 100% |????????????????????????????????| 26.0MB 47kB/s > Collecting numpy>=1.8.2 (from scipy) > Downloading https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a > 83.ssl.cf2.rackcdn.com/numpy-1.14.0.dev0+20170824081646_ > 707f33f-cp36-none-win32.whl (6.8MB) > 100% |????????????????????????????????| 6.9MB 168kB/s > Installing collected packages: numpy, scipy > Successfully installed numpy-1.14.0.dev0+707f33f scipy-1.0.0.dev0+2a1fdcf > > C:\Users\pauli\src\env2\Scripts>python > Python 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:14:34) [MSC v.1900 32 bit > (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy.integrate, scipy.linalg, numpy as np > >>> scipy.integrate.quad(lambda x: 1/(1 + x**2), -np.inf, np.inf) > (3.141592653589793, 5.155583041103855e-10) > >>> scipy.linalg.eigvals([[1,0],[0,2]]) > array([ 1.+0.j, 2.+0.j]) > >>> exit() > > - -- > Pauli Virtanen > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEiNDtwDFNJaodBiJZHOiglY3YpVcFAlmfWaUACgkQHOiglY3Y > pVfxSg//VvwYDanA/tam34I2hrwQQFtQaW3E/ikqNAM91XvQwluc9RsVVDx2zspk > 9ywXyfFbnNpMK+/emow8o6kuaFes1NvutobbAOQ4L9jj0ofD9pCWVM6SLkRIkSea > km7RdK13vpWtrghPkvkqGFNhY2eDSuV4S8qeR+78KwSUADYjB0m1Yfpfm6LtUKOX > tKwGhDGWzi1vcBPJqgQQYJDjBbVNbY5aao6QnjLeNkgXW6RZYhxUyBeWph7GPrEL > pNFvoOknjxa5nItZvt948+7PsgZZarHGlyqeAy8Nb0Bkukm1Uovo7V5gMfiDS6nT > cA2xNkELh9Zoyr+9kRaaDh2B0U6qWPOiU/IE6VvCK72N70tzdi59a0GkzLzVen2b > hgK5RBsa5fL9sxo4oN/bcApnUp6K98XAV4eJhIlZPbnnvSfqbKobX7D1G+qBokBN > 90XnWLUjkJpzr1emqyrQPVbrd8OflIhs2aQv0l5gZKrXuBgGFgoCDwEJmrzd6K+n > 1iLr73BuZEFN/jLvT9cx+XbAQkXhCbD2hL4ly0u7BuBzAbOE19iugSnap/sjueRW > FlOKSddobW86TeOICKurH9TCcFRu6mu1tQvCkucqkY49gXpu3srzUcdog9gQe46H > 2JFNQICFaYWhF7jVY9cwOXssEHOc6PCa0FdOxMX5W/p5k0xuqzA= > =tjJg > -----END PGP SIGNATURE----- > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Aug 24 19:58:05 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 24 Aug 2017 16:58:05 -0700 Subject: [SciPy-Dev] Scipy Windows wheels In-Reply-To: References: <1503615397.2351.22.camel@iki.fi> Message-ID: On Thu, Aug 24, 2017 at 4:49 PM, wrote: > > > On Thu, Aug 24, 2017 at 6:56 PM, Pauli Virtanen wrote: >> >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA256 >> >> Dear all, >> >> Prerelease binary wheels for Scipy on Windows 32-bit & 64-bit are now >> available in case you would like to test them. >> >> Currently, the plans are that binary wheels will also be provided for >> future releases on PyPi, so that you will be able to do simply "pip >> install scipy" also on Windows. At least, assuming we manage to test >> these wheels well enough for which help would be useful. >> >> You can install the scipy prerelease packages as shown below. Note that >> they are meant for testing only, and correspond to the current Scipy >> development version. 
Please report issues found on the Scipy issue
>> tracker on github (be sure to mention how you installed scipy and
>> python).
>>
>> The wheels are meant to be used with the Python obtained from
>> https://python.org --- these are not meant to be used with e.g. Conda,
>> although it may be they work.
>
> What's the numpy requirement?
> I assume the scipy version should not be used with a currently installed
> numpy unless it is Fortran compatible.
> For example, Winpython distributes Gohlke's binaries built with MKL.
> Is there an automatic check when not installing into an empty virtual
> environment?

IIUC the only reason Gohlke's scipy builds require a specific numpy is
that they use a tricky hack where they assume the numpy package has
installed MKL at a specific location. I don't think these builds use
any hack like this and should work with any numpy. (This makes the
download size larger b/c it means scipy has to carry its own copy of
BLAS, but the trade-off in "just works" is worth it IMO.)

Also, numpy itself doesn't provide any Fortran APIs, so Fortran ABI
shouldn't matter at all.

-n

--
Nathaniel J. Smith -- https://vorpus.org

From pav at iki.fi Thu Aug 24 20:00:37 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 25 Aug 2017 02:00:37 +0200
Subject: [SciPy-Dev] Scipy Windows wheels
In-Reply-To:
References: <1503615397.2351.22.camel@iki.fi>
Message-ID: <1503619237.13583.2.camel@iki.fi>

On Thu, 2017-08-24 at 19:49 -0400, josef.pktd at gmail.com wrote:
[clip]
> What's the numpy requirement?
> I assume the scipy version should not be used with a currently
> installed
> numpy unless it is Fortran compatible.
> For example, Winpython distributes Gohlke's binaries built with MKL.
> Is there an automatic check when not installing into an empty virtual
> environment?

Any official Numpy wheel available on Pypi should be fine. You get ABI
version errors at import time as usual if you get it wrong.

If winpython installs a python that uses a different CRT than the
python.org ones, you may get extra problems.

Pauli

From pav at iki.fi Thu Aug 24 20:08:57 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 25 Aug 2017 02:08:57 +0200
Subject: [SciPy-Dev] Scipy Windows wheels
In-Reply-To:
References: <1503615397.2351.22.camel@iki.fi>
Message-ID: <1503619737.13583.8.camel@iki.fi>

On Thu, 2017-08-24 at 16:58 -0700, Nathaniel Smith wrote:
[clip]
> IIUC the only reason Gohlke's scipy builds require a specific numpy
> is
> that they use a tricky hack where they assume the numpy package has
> installed MKL at a specific location. I don't think these builds use
> any hack like this and should work with any numpy. (This makes the
> download size larger b/c it means scipy has to carry its own copy of
> BLAS, but the trade-off in "just works" is worth it IMO.)
>
> Also, numpy itself doesn't provide any Fortran APIs, so Fortran ABI
> shouldn't matter at all.

These wheels lug their own OpenBLAS and gfortran libs with them, so in
theory at least they should only care about the correct Numpy version,
and the correct Python CRT.
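
For anyone testing: a quick way to check what a given install actually
linked against, plus a smoke test (just a suggestion):

import numpy, scipy, scipy.linalg
print(numpy.__version__, scipy.__version__)
scipy.show_config()    # prints the BLAS/LAPACK configuration of the build
print(scipy.linalg.eigvals([[1, 0], [0, 2]]))   # exercises the bundled LAPACK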
Pauli

From pav at iki.fi Fri Aug 25 14:17:53 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 25 Aug 2017 20:17:53 +0200
Subject: [SciPy-Dev] Scipy Windows wheels
In-Reply-To: <1503619737.13583.8.camel@iki.fi>
References: <1503615397.2351.22.camel@iki.fi> <1503619737.13583.8.camel@iki.fi>
Message-ID: <1503685073.2321.0.camel@iki.fi>

On Fri, 2017-08-25 at 02:08 +0200, Pauli Virtanen wrote:
[clip]
> These wheels lug their own OpenBLAS and gfortran libs with them, so
> in
> theory at least they should only care about the correct Numpy
> version,
> and the correct Python CRT.

Indeed, they appear to work fine also in practice with Gohlke's numpy
package.

Pauli

From jetaylor74 at gmail.com Sat Aug 26 02:08:22 2017
From: jetaylor74 at gmail.com (Jonathan Taylor)
Date: Fri, 25 Aug 2017 23:08:22 -0700
Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply
Message-ID:

I've got a largeish array that I have saved in a .MAT file that I need to
use for matvec multiply several times.

It seems that if I copy the array before running the matvec I get a
significant speedup. Is this known?

I can attach a link to a particular .MAT file if helpful.

--
Jonathan Taylor
Dept. of Statistics
Sequoia Hall, 137
390 Serra Mall
Stanford, CA 94305
Tel: 650.723.9230
Fax: 650.725.8977
Web: http://www-stat.stanford.edu/~jtaylo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Sat Aug 26 02:39:05 2017
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 26 Aug 2017 08:39:05 +0200
Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply
In-Reply-To:
References:
Message-ID: <1503729545.5586.7.camel@iki.fi>

On Fri, 2017-08-25 at 23:08 -0700, Jonathan Taylor wrote:
> I've got a largeish array that I have saved in a .MAT file that I
> need to
> use for
> matvec multiply several times.
>
> It seems that if I copy the array before running the matvec I get a
> significant speedup. Is this known?

If you do the copy in a way such that the format of the matrix is
different (e.g. different sparse matrix format), then the speed can
differ. Check print(type(original_matrix), type(copied_matrix)).

--
Pauli Virtanen

From njs at pobox.com Sat Aug 26 02:41:08 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 25 Aug 2017 23:41:08 -0700
Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply
In-Reply-To: <1503729545.5586.7.camel@iki.fi>
References: <1503729545.5586.7.camel@iki.fi>
Message-ID:

On Fri, Aug 25, 2017 at 11:39 PM, Pauli Virtanen wrote:
> On Fri, 2017-08-25 at 23:08 -0700, Jonathan Taylor wrote:
>> I've got a largeish array that I have saved in a .MAT file that I
>> need to
>> use for
>> matvec multiply several times.
>>
>> It seems that if I copy the array before running the matvec I get a
>> significant speedup. Is this known?
>
> If you do the copy in a way such that the format of the matrix is
> different (e.g. different sparse matrix format), then the speed can
> differ. Check print(type(original_matrix), type(copied_matrix)).

If it's a dense matrix, then it's also possible that the original
matrix gets Fortran layout, and the copy is C layout. To test that you
want: print(original_matrix.strides, copied_matrix.strides)

-n

--
Nathaniel J.
Smith -- https://vorpus.org

From jonathan.taylor at stanford.edu Sat Aug 26 03:09:05 2017 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Sat, 26 Aug 2017 00:09:05 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

Yes, it's a dense 2500x2000 matrix.

Loaded strides: (8, 16000)
Copied strides: (20000, 8)

So, matvec is just slower because of strides and where numpy retrieves data? Is there a simple way to do this besides a copy? I can easily afford the copy, just wondering.

On Fri, Aug 25, 2017 at 11:41 PM, Nathaniel Smith wrote: > On Fri, Aug 25, 2017 at 11:39 PM, Pauli Virtanen wrote: > > pe, 2017-08-25 kello 23:08 -0700, Jonathan Taylor kirjoitti: > >> I've got a largeish array that I have saved in a .MAT file that I > >> need to > >> use for > >> matvec multiply several times. > >> > >> It seems that if I copy the array before running the matvec I get a > >> significant speedup. Is this known? > > > > If you do the copy in a way such that the format of the matrix is > > different (e.g. different sparse matrix format), then the speed can > > differ. Check print(type(original_matrix), type(copied_matrix)). > > If it's a dense matrix, then it's also possible that the original > matrix gets Fortran layout, and the copy is C layout. To test that you > want: print(original_matrix.strides, copied_matrix.strides) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shoyer at gmail.com Sat Aug 26 13:10:52 2017 From: shoyer at gmail.com (Stephan Hoyer) Date: Sat, 26 Aug 2017 10:10:52 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

On Sat, Aug 26, 2017 at 12:09 AM, Jonathan Taylor < jonathan.taylor at stanford.edu> wrote: > So, matvec is just slower because of strides and where numpy retrieves > data? Is there a simple way to do this besides a copy? I can easily afford > the copy, just wondering. >

No, the only way to change the strides of an array with the same data is to make a copy.

Array operations will always be fastest when the smallest strides are along the axis iterated over in the inner-most (summed) loop. So the existing strides of your matrix are not sub-optimal in general, just for this specific operation. They would be suitable, for example, in a vector-matrix multiply.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jonathan.taylor at stanford.edu Sat Aug 26 13:40:48 2017 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Sat, 26 Aug 2017 10:40:48 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

Thanks for the help.

What I am actually doing is computing the gradient of a least squares objective. That is,

X.T.dot(X.dot(beta) - Y)

If X is such that X.dot(beta) is fast (i.e. matvec is fast) then am I missing a "simple" optimization here at the cost of a copy? Alternatively, if X is such that vecmat is fast, then what is the best way to do this?
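To make the comparison concrete, this is the kind of timing sketch I have in mind (shapes and loop counts made up; only numpy and the standard library assumed):

import time
import numpy as np

rng = np.random.RandomState(0)
X_c = np.ascontiguousarray(rng.randn(2500, 2000))  # C (row-major) layout
X_f = np.asfortranarray(X_c)                       # same values, Fortran layout
beta = rng.randn(2000)
Y = rng.randn(2500)

def grad(X, beta, Y):
    # gradient of the least squares objective 0.5 * ||X beta - Y||^2
    return X.T.dot(X.dot(beta) - Y)

for X, label in [(X_c, "C order"), (X_f, "F order")]:
    t0 = time.time()
    for _ in range(100):
        grad(X, beta, Y)
    print(label, time.time() - t0)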
A copy seems easiest, and possibly applying the previous "simple" optimization. Based on my understanding of the other replies, I would guess that if X2=X.copy(), then the fastest way would be (X2.dot(beta) - Y).dot(X) This doesn't pan out in my example, the winner is X2.T.dot(X2.dot(beta) - Y) which is about the same as (X2.dot(beta)-Y).dot(X2) I made a small gist: https://gist.github.com/da7b2ef6ef109511af06a9cebbfc8ed1 One difference I see between a numpy array with the same strides and the array loaded from a MAT file is the ALIGNED flag. On Sat, Aug 26, 2017 at 10:10 AM, Stephan Hoyer wrote: > On Sat, Aug 26, 2017 at 12:09 AM, Jonathan Taylor < > jonathan.taylor at stanford.edu> wrote: > >> So, matvec is just slower because of strides and where numpy retrieves >> data? Is there a simple way to do this besides a copy? I can easily afford >> the copy, just wondering. >> > > No, the only way to change the strides of an array with the same data is > to make a copy. > > Array operations will always be fastest when the smallest strides are > along the axis iterated over in the inner-most (summed) loop. So this > existing strides of your matrix are not sub-optimal in general, just for > this specific operation. They would be suitable, for example, in a > vector-matrix multiply. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat Aug 26 14:06:34 2017 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 26 Aug 2017 11:06:34 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID: On Sat, Aug 26, 2017 at 12:09 AM, Jonathan Taylor < jonathan.taylor at stanford.edu> wrote: > > Yes, it's a dense 2500x2000 matrix. > > Loaded strides: (8, 16000) > > Copied strides: (20000, 8) > > So, matvec is just slower because of strides and where numpy retrieves data? Is there a simple way to do this besides a copy? I can easily afford the copy, just wondering. It's not simpler, but the most efficient and idiomatic way to ensure C-contiguity is to use np.ascontiguousarray(). This will make a copy only if necessary. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Aug 26 20:13:32 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2017 18:13:32 -0600 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Thu, Aug 24, 2017 at 4:11 AM, Ralf Gommers wrote: > > > On Sun, Aug 20, 2017 at 12:53 PM, Ralf Gommers > wrote: > >> Hi all, >> >> I propose that we as SciPy developers and community adopt a Code of >> Conduct. >> >> As you probably know, Code of Conduct (CoC) documents are becoming more >> common every year for open source projects, and there are a number of good >> reasons to adopt a CoC: >> 1. It gives us the opportunity to explicitly express the values and >> behaviors we'd like to see in our community. >> 2. 
It is designed to make everyone feel welcome (and while I think we're >> a welcoming community anyway, not having a CoC may look explicitly >> unwelcoming to some potential contributors nowadays). >> 3. It gives us a tool to address a set of problems if and when they >> occur, as well as a way for anyone to report issues or behavior that is >> unacceptable to them (much better than having those people potentially >> leave the community). >> 4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however I >> think we'd like to be in the near future. NumFOCUS has started to require >> having a CoC as a prerequisite for new projects joining it. The PSF has >> the same requirement for any sponsorship for events/projects that it gives. >> >> Also note that GitHub has starting checking the presence of a CoC fairly >> prominently (https://github.com/scipy/scipy/community), and has also >> produced a guide with things to think about when formulating a CoC: >> https://opensource.guide/code-of-conduct/. I recommend reading that >> guide (as well as others guides on that site), it's really good. >> >> To get to a CoC document, a good approach is to borrow text from a CoC >> that has been in use for a while and has proven to be valuable, and then >> modify where needed (similar to a software license - don't invent your >> own). I considered three existing CoC's: >> - The Contributor Covenant (http://contributor-covenant.o >> rg/version/1/2/0/): simple, concise, the most widely used one. The >> NumFOCUS recommended one is based on it as well ( >> https://www.numfocus.org/about/code-of-conduct/). >> - The Python Community Code of Conduct (https://www.python.org/psf/co >> deofconduct/): also simple, addresses mostly the spirit in which the >> Python community is operating / should operate. >> - The Jupyter Code of Conduct (https://github.com/jupyter/go >> vernance/tree/master/conduct): much more detailed, in part derived from >> the Speak up! and Django ones, more appropriate for large communities. >> >> The contributor covenant looks excellent, short, well structured, and easy to understand. I'd suggest adding just a bit for clarification, for instance, what venues (mailing lists, github) are considered in the SciPy domain, and a bit on whom to contact in case of problems. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Aug 26 21:14:59 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 27 Aug 2017 13:14:59 +1200 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sun, Aug 27, 2017 at 12:13 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Thu, Aug 24, 2017 at 4:11 AM, Ralf Gommers > wrote: > >> >> >> On Sun, Aug 20, 2017 at 12:53 PM, Ralf Gommers >> wrote: >> >>> Hi all, >>> >>> I propose that we as SciPy developers and community adopt a Code of >>> Conduct. >>> >>> As you probably know, Code of Conduct (CoC) documents are becoming more >>> common every year for open source projects, and there are a number of good >>> reasons to adopt a CoC: >>> 1. It gives us the opportunity to explicitly express the values and >>> behaviors we'd like to see in our community. >>> 2. It is designed to make everyone feel welcome (and while I think we're >>> a welcoming community anyway, not having a CoC may look explicitly >>> unwelcoming to some potential contributors nowadays). >>> 3. 
It gives us a tool to address a set of problems if and when they >>> occur, as well as a way for anyone to report issues or behavior that is >>> unacceptable to them (much better than having those people potentially >>> leave the community). >>> 4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however I >>> think we'd like to be in the near future. NumFOCUS has started to require >>> having a CoC as a prerequisite for new projects joining it. The PSF has >>> the same requirement for any sponsorship for events/projects that it gives. >>> >>> Also note that GitHub has starting checking the presence of a CoC fairly >>> prominently (https://github.com/scipy/scipy/community), and has also >>> produced a guide with things to think about when formulating a CoC: >>> https://opensource.guide/code-of-conduct/. I recommend reading that >>> guide (as well as others guides on that site), it's really good. >>> >>> To get to a CoC document, a good approach is to borrow text from a CoC >>> that has been in use for a while and has proven to be valuable, and then >>> modify where needed (similar to a software license - don't invent your >>> own). I considered three existing CoC's: >>> - The Contributor Covenant (http://contributor-covenant.o >>> rg/version/1/2/0/): simple, concise, the most widely used one. The >>> NumFOCUS recommended one is based on it as well ( >>> https://www.numfocus.org/about/code-of-conduct/). >>> - The Python Community Code of Conduct (https://www.python.org/psf/co >>> deofconduct/): also simple, addresses mostly the spirit in which the >>> Python community is operating / should operate. >>> - The Jupyter Code of Conduct (https://github.com/jupyter/go >>> vernance/tree/master/conduct): much more detailed, in part derived from >>> the Speak up! and Django ones, more appropriate for large communities. >>> >>> > The contributor covenant looks excellent, short, well structured, and easy > to understand. I'd suggest adding just a bit for clarification, for > instance, what venues (mailing lists, github) are considered in the SciPy > domain, and a bit on whom to contact in case of problems. > Thanks for the feedback Chuck. Pauli said on GitHub he preferred something shorter and less flowery as well, so that's two votes for the contributor covenant or something more in that direction. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Aug 26 21:32:41 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 26 Aug 2017 18:32:41 -0700 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sat, Aug 26, 2017 at 6:14 PM, Ralf Gommers wrote: > > > On Sun, Aug 27, 2017 at 12:13 PM, Charles R Harris > wrote: >> >> >> >> On Thu, Aug 24, 2017 at 4:11 AM, Ralf Gommers >> wrote: >>> >>> >>> >>> On Sun, Aug 20, 2017 at 12:53 PM, Ralf Gommers >>> wrote: >>>> >>>> Hi all, >>>> >>>> I propose that we as SciPy developers and community adopt a Code of >>>> Conduct. >>>> >>>> As you probably know, Code of Conduct (CoC) documents are becoming more >>>> common every year for open source projects, and there are a number of good >>>> reasons to adopt a CoC: >>>> 1. It gives us the opportunity to explicitly express the values and >>>> behaviors we'd like to see in our community. >>>> 2. It is designed to make everyone feel welcome (and while I think we're >>>> a welcoming community anyway, not having a CoC may look explicitly >>>> unwelcoming to some potential contributors nowadays). 
>>>> 3. It gives us a tool to address a set of problems if and when they >>>> occur, as well as a way for anyone to report issues or behavior that is >>>> unacceptable to them (much better than having those people potentially leave >>>> the community). >>>> 4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however I >>>> think we'd like to be in the near future. NumFOCUS has started to require >>>> having a CoC as a prerequisite for new projects joining it. The PSF has the >>>> same requirement for any sponsorship for events/projects that it gives. >>>> >>>> Also note that GitHub has starting checking the presence of a CoC fairly >>>> prominently (https://github.com/scipy/scipy/community), and has also >>>> produced a guide with things to think about when formulating a CoC: >>>> https://opensource.guide/code-of-conduct/. I recommend reading that guide >>>> (as well as others guides on that site), it's really good. >>>> >>>> To get to a CoC document, a good approach is to borrow text from a CoC >>>> that has been in use for a while and has proven to be valuable, and then >>>> modify where needed (similar to a software license - don't invent your own). >>>> I considered three existing CoC's: >>>> - The Contributor Covenant >>>> (http://contributor-covenant.org/version/1/2/0/): simple, concise, the most >>>> widely used one. The NumFOCUS recommended one is based on it as well >>>> (https://www.numfocus.org/about/code-of-conduct/). >>>> - The Python Community Code of Conduct >>>> (https://www.python.org/psf/codeofconduct/): also simple, addresses mostly >>>> the spirit in which the Python community is operating / should operate. >>>> - The Jupyter Code of Conduct >>>> (https://github.com/jupyter/governance/tree/master/conduct): much more >>>> detailed, in part derived from the Speak up! and Django ones, more >>>> appropriate for large communities. >>>> >> >> The contributor covenant looks excellent, short, well structured, and easy >> to understand. I'd suggest adding just a bit for clarification, for >> instance, what venues (mailing lists, github) are considered in the SciPy >> domain, and a bit on whom to contact in case of problems. > > > Thanks for the feedback Chuck. Pauli said on GitHub he preferred something > shorter and less flowery as well, so that's two votes for the contributor > covenant or something more in that direction. It looks like the link above goes to the 1.2 version of the Contributor Covenant, while the latest version is 1.4: https://www.contributor-covenant.org/version/1/4/ The 1.4 version has a little more detail on venues and a slot to put in a contact address. I do think there needs to be at least somewhat more detail on how reports will be handled -- the last thing you need while trying to handle a painful and delicate issue is to also be trying to figure out, like, which mailing list you can use to talk about it or whether you need to be setting up a new one. At least something like "we'll have a small committee, they'll have a private mailing list, archives will be available to the committee members but otherwise confidential, reports will be acknowledged quickly, if a complaint is about someone on the committee then they'll recuse themselves", that kind of stuff. -n -- Nathaniel J. 
Smith -- https://vorpus.org From charlesr.harris at gmail.com Sat Aug 26 22:57:20 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 26 Aug 2017 20:57:20 -0600 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sat, Aug 26, 2017 at 7:32 PM, Nathaniel Smith wrote: > On Sat, Aug 26, 2017 at 6:14 PM, Ralf Gommers > wrote: > > > > > > On Sun, Aug 27, 2017 at 12:13 PM, Charles R Harris > > wrote: > >> > >> > >> > >> On Thu, Aug 24, 2017 at 4:11 AM, Ralf Gommers > >> wrote: > >>> > >>> > >>> > >>> On Sun, Aug 20, 2017 at 12:53 PM, Ralf Gommers > > >>> wrote: > >>>> > >>>> Hi all, > >>>> > >>>> I propose that we as SciPy developers and community adopt a Code of > >>>> Conduct. > >>>> > >>>> As you probably know, Code of Conduct (CoC) documents are becoming > more > >>>> common every year for open source projects, and there are a number of > good > >>>> reasons to adopt a CoC: > >>>> 1. It gives us the opportunity to explicitly express the values and > >>>> behaviors we'd like to see in our community. > >>>> 2. It is designed to make everyone feel welcome (and while I think > we're > >>>> a welcoming community anyway, not having a CoC may look explicitly > >>>> unwelcoming to some potential contributors nowadays). > >>>> 3. It gives us a tool to address a set of problems if and when they > >>>> occur, as well as a way for anyone to report issues or behavior that > is > >>>> unacceptable to them (much better than having those people > potentially leave > >>>> the community). > >>>> 4. SciPy is not yet a fiscally sponsored project of NumFOCUS, however > I > >>>> think we'd like to be in the near future. NumFOCUS has started to > require > >>>> having a CoC as a prerequisite for new projects joining it. The PSF > has the > >>>> same requirement for any sponsorship for events/projects that it > gives. > >>>> > >>>> Also note that GitHub has starting checking the presence of a CoC > fairly > >>>> prominently (https://github.com/scipy/scipy/community), and has also > >>>> produced a guide with things to think about when formulating a CoC: > >>>> https://opensource.guide/code-of-conduct/. I recommend reading that > guide > >>>> (as well as others guides on that site), it's really good. > >>>> > >>>> To get to a CoC document, a good approach is to borrow text from a CoC > >>>> that has been in use for a while and has proven to be valuable, and > then > >>>> modify where needed (similar to a software license - don't invent > your own). > >>>> I considered three existing CoC's: > >>>> - The Contributor Covenant > >>>> (http://contributor-covenant.org/version/1/2/0/): simple, concise, > the most > >>>> widely used one. The NumFOCUS recommended one is based on it as well > >>>> (https://www.numfocus.org/about/code-of-conduct/). > >>>> - The Python Community Code of Conduct > >>>> (https://www.python.org/psf/codeofconduct/): also simple, addresses > mostly > >>>> the spirit in which the Python community is operating / should > operate. > >>>> - The Jupyter Code of Conduct > >>>> (https://github.com/jupyter/governance/tree/master/conduct): much > more > >>>> detailed, in part derived from the Speak up! and Django ones, more > >>>> appropriate for large communities. > >>>> > >> > >> The contributor covenant looks excellent, short, well structured, and > easy > >> to understand. 
I'd suggest adding just a bit for clarification, for > >> instance, what venues (mailing lists, github) are considered in the > SciPy > >> domain, and a bit on whom to contact in case of problems. > > > > > > Thanks for the feedback Chuck. Pauli said on GitHub he preferred > something > > shorter and less flowery as well, so that's two votes for the contributor > > covenant or something more in that direction. > > It looks like the link above goes to the 1.2 version of the > Contributor Covenant, while the latest version is 1.4: > > https://www.contributor-covenant.org/version/1/4/ > > The 1.4 version has a little more detail on venues and a slot to put > in a contact address.

The extra sections are good but I think the long enumerations dilute the message. All means all is a stronger statement than a list of every conceivable attribute. I think the 1.2 version is better in that regard.

> I do think there needs to be at least somewhat more detail on how > reports will be handled -- the last thing you need while trying to > handle a painful and delicate issue is to also be trying to figure > out, like, which mailing list you can use to talk about it or whether > you need to be setting up a new one. At least something like "we'll > have a small committee, they'll have a private mailing list, archives > will be available to the committee members but otherwise confidential, > reports will be acknowledged quickly, if a complaint is about someone > on the committee then they'll recuse themselves", that kind of stuff.

At the least we should be in a position to handle a complaint if one shows up. Looks like some of the organizations have a separate enforcement manual, which I think is a good idea. Most SciPy interactions are in public venues, which should help.

Chuck

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From marc.barbry at mailoo.org Mon Aug 28 12:14:44 2017 From: marc.barbry at mailoo.org (marc) Date: Mon, 28 Aug 2017 18:14:44 +0200 Subject: [SciPy-Dev] Improving performance of sparse matrix multiplication Message-ID: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org>

Dear Scipy developers,

We are developing a program that performs a large number of sparse matrix multiplications. We recently wrote a Python version of this program for several reasons (the original code is in Fortran).

We are now trying to improve the performance of the Python version, and we noticed that one of the bottlenecks is the sparse matrix multiplications. For example,

import numpy as np
from scipy.sparse import csr_matrix

row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6], dtype=np.float32)

csr = csr_matrix((data, (row, col)), shape=(3, 3))
print(csr.toarray())

A = np.array([1, 2, 3], dtype=np.float32)

print(csr*A)

I started to look at the Scipy code to see how these functions were implemented, and realized that there is no OpenMP parallelization over the for loops, for instance in the function csr_matvec in sparse/sparsetools/csr.h (line 1120). Is it possible to parallelize these loops with OpenMP? Do you maybe have better ideas to improve the performance of this kind of operation?

Best regards,
Marc Barbry

From jonathan.taylor at stanford.edu Mon Aug 28 13:13:18 2017 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 28 Aug 2017 10:13:18 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

Thanks for all the help.
That said, I'm not sure it is an issue of the strides. I can easily recreate the slowdown as in the above gist ( https://gist.github.com/da7b2ef6ef109511af06a9cebbfc8ed1 ). Also, modifying the flags of a user-created ndarray so they agree with the loaded one is still noticeably faster than using the array from `scipy.io.loadmat`.

For my purposes, a copy is just fine, but I think this might be an issue that could be looked into. Perhaps I should file an issue on github?

On Sat, Aug 26, 2017 at 11:06 AM, Robert Kern wrote: > On Sat, Aug 26, 2017 at 12:09 AM, Jonathan Taylor < > jonathan.taylor at stanford.edu> wrote: > > > > Yes, it's a dense 2500x2000 matrix. > > > > Loaded strides: (8, 16000) > > > > Copied strides: (20000, 8) > > > > So, matvec is just slower because of strides and where numpy retrieves > data? Is there a simple way to do this besides a copy? I can easily afford > the copy, just wondering. > > It's not simpler, but the most efficient and idiomatic way to ensure > C-contiguity is to use np.ascontiguousarray(). This will make a copy only > if necessary. > > -- > Robert Kern > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From perimosocordiae at gmail.com Mon Aug 28 13:35:46 2017 From: perimosocordiae at gmail.com (CJ Carey) Date: Mon, 28 Aug 2017 13:35:46 -0400 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

I tried reproducing the results from your gist using scipy 0.19.0, and I found that Xmat is aligned after loadmat:

In [3]: Xmat = sio.loadmat('data.mat')['X']

In [4]: Xmat.flags
Out[4]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False

On Mon, Aug 28, 2017 at 1:26 PM, Robert Kern wrote: > On Mon, Aug 28, 2017 at 10:13 AM, Jonathan Taylor < > jonathan.taylor at stanford.edu> wrote: > > > > Thanks for all the help. > > > > That said, I'm not sure it is an issue of the strides. I can easily > recreate the slowdown as in the above gist ( https://gist.github.com/da7b2ef6ef109511af06a9cebbfc8ed1 ). Also, modifying the flags of a > user-created ndarray so they agree with the loaded one is still noticably > faster than using the array from `scipy.io.loadmat` > > Ah yeah, if the data is aligned, then that might end up faster. Your > optimized BLAS will be able to use certain CPU instructions that require > aligned data. Setting the ALIGNED flag to false won't actually make the > data unaligned; your BLAS checks the data itself, not the numpy flag. > > > For my purposes, a copy is just fine, but I think this might be an issue > that could be looked into. Perhaps I should file an issue on github? > > It might be worth checking if scipy.io.loadmat() can be made to ensure > that it always creates aligned arrays. There isn't anything to be done > about .dot(), though. > > -- > Robert Kern > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jonathan.taylor at stanford.edu Mon Aug 28 15:59:45 2017 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 28 Aug 2017 12:59:45 -0700 Subject: [SciPy-Dev] scipy.io.loadmat and matvec multiply In-Reply-To: References: <1503729545.5586.7.camel@iki.fi> Message-ID:

Hmm... I am running scipy 0.19.1 (added that line to gist to be sure). When it loaded as aligned was the timing comparable to `Xnp`?

Just to be sure, I created a new conda ipython environment and installed scipy -- version is 0.19.1. Flags are still the same.

The docs for scipy.io.loadmat don't seem to have an aligned option -- will ping Matthew Brett....

On Mon, Aug 28, 2017 at 10:35 AM, CJ Carey wrote: > I tried reproducing the results from your gist using scipy 0.19.0, and I > found that Xmat is aligned after loadmat: > > In [3]: Xmat = sio.loadmat('data.mat')['X'] > > In [4]: Xmat.flags > Out[4]: > C_CONTIGUOUS : False > F_CONTIGUOUS : True > OWNDATA : False > WRITEABLE : True > ALIGNED : True > UPDATEIFCOPY : False > > On Mon, Aug 28, 2017 at 1:26 PM, Robert Kern > wrote: > >> On Mon, Aug 28, 2017 at 10:13 AM, Jonathan Taylor < >> jonathan.taylor at stanford.edu> wrote: >> > >> > Thanks for all the help. >> > >> > That said, I'm not sure it is an issue of the strides. I can easily >> recreate the slowdown as in the above gist ( https://gist.github.com/da7b2ef6ef109511af06a9cebbfc8ed1 ).
Also, >> modifying the flags of a user-created ndarray so they agree with the loaded >> one is still noticably faster than using the array from `scipy.io.loadmat` >> >> Ah yeah, if the data is aligned, then that might end up faster. Your >> optimized BLAS will be able to use certain CPU instructions that require >> aligned data. Setting the ALIGNED flag to false won't actually make the >> data unaligned; your BLAS checks the data itself, not the numpy flag. >> >> > For my purposes, a copy is just fine, but I think this might be an >> issue that could be looked into. Perhaps I should file an issue on github? >> >> It might be worth checking if scipy.io.loadmat() can be made to ensure >> that it always creates aligned arrays. There isn't anything to be done >> about .dot(), though. >> >> -- >> Robert Kern >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- Jonathan Taylor Dept. of Statistics Sequoia Hall, 137 390 Serra Mall Stanford, CA 94305 Tel: 650.723.9230 Fax: 650.725.8977 Web: http://www-stat.stanford.edu/~jtaylo

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From antonior92 at gmail.com Mon Aug 28 18:47:45 2017 From: antonior92 at gmail.com (Antonio Ribeiro) Date: Mon, 28 Aug 2017 19:47:45 -0300 Subject: [SciPy-Dev] GSoC 2017 - Large-scale Constrained Optimization Message-ID: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com>

Hi all,

During the last few months I have worked on my Google Summer of Code (GSoC) project, which consists of implementing a large-scale optimization algorithm to be integrated into SciPy.

The algorithm implemented was an interior point method described in <https://antonior92.github.io/posts/2017/07/interior-point-method/>. A series of blog posts describes different aspects of the algorithm and its use.

The implementation can be found in a separate repository and is being integrated into SciPy through pull request #7729.

During my GSoC I have implemented the optimization algorithm, tested it on almost one hundred examples <https://antonior92.github.io/posts/2017/00/NumericalResults/> and created an interface for using it. While the optimization solver is ready to be used and tested, there are still a few features I want to include, namely quasi-Newton approximations to the Hessian matrix: the SR1 approximation, the BFGS approximation and the L-BFGS approximation. I included those as optional items in my GSoC proposal and, unfortunately, they will not be ready for the GSoC submission. Furthermore, there are still some questions under discussion about how to best integrate my implementation into the SciPy optimization library. However, I will tend to those final points in the weeks following the end of the program.

Finally, I would like to thank the SciPy community and my mentors: Nikolay, Matt and Ralf, with whom I have greatly enjoyed working during these three months.

Antônio H. Ribeiro

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From phillip.m.feldman at gmail.com Tue Aug 29 00:33:30 2017 From: phillip.m.feldman at gmail.com (Phillip Feldman) Date: Mon, 28 Aug 2017 21:33:30 -0700 Subject: [SciPy-Dev] GSoC 2017 - Large-scale Constrained Optimization In-Reply-To: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com> References: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com> Message-ID: Hello Antonio, Is it true that there is currently no interior point optimizer in SciPy? If so, your algorithm will be an extremely valuable addition. Phillip P.S. `scipy.optimize.lsq_linear` is described as "interior-point-like", but I'm not entirely sure what this means. On Mon, Aug 28, 2017 at 3:47 PM, Antonio Ribeiro wrote: > Hi all, > > During the last few months I have worked on my Google Summer of Code > (GSoC) project, that consists of implementing a large-scale optimization > algorithm to be integrated to Scipy. > > The algorithm implemented was an interior point method described in < > https://antonior92.github.io/posts/2017/07/interior-point-method/>. A > series of blog post describe different aspects of the algorithm and its use > . > > The implementation can be found on the separate repository: > > > > and is being integrated to SciPy through the pull request #7729 > > > > During my GSoC I have implemented the optimization algorithm, tested it on > almost one hundred examples NumericalResults/> and created an interface for using it. While the > optimization solver is ready to be used and tested, there are still a > features I want to include, namely quasi-Newton approximations to the > Hessian matrix including: SR1 approximation; BFGS approximation; and, > L-BFGS approximation. I included those as optional items on my GSoC > proposal and, unfortunately, they will not be ready for the GSoC > submission. Furthermore, there are still some questions about how to best > integrate my implementation to the optimization SciPy library that are > still under discussion. However, I will tend to those final points in the > weeks following the end of the program. > > Finally, I would like to thanks SciPy community and my mentors: Nikolay, > Matt and Ralf, with whom I have greatly enjoyed the opportunity to work > with during these three months. > > Ant?nio H. Ribeiro > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Aug 29 03:56:27 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 29 Aug 2017 19:56:27 +1200 Subject: [SciPy-Dev] Improving performance of sparse matrix multiplication In-Reply-To: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org> References: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org> Message-ID: On Tue, Aug 29, 2017 at 4:14 AM, marc wrote: > Dear Scipy developers, > > We are developing a program that perform a large number of sparse matrix > multiplications. We recently wrote a Python version of this program for > several reasons (the original code is in Fortran). 
> > We are trying now to improve the performance of the Python version and we > noticed that one of the bottlenecks are the sparse matrix multiplications, > as example, > > import numpy as np > from scipy.sparse import csr_matrix > > row = np.array([0, 0, 1, 2, 2, 2]) > col = np.array([0, 2, 2, 0, 1, 2]) > data = np.array([1, 2, 3, 4, 5, 6], dtype=np.float32) > > csr = csr_matrix((data, (row, col)), shape=(3, 3)) > print(csr.toarray()) > > A = np.array([1, 2, 3], dtype=np.float32) > > print(csr*A) > > > I started to look at the Scipy code to see how this functions were > implemented, and realized that there is no openmp parallelization over the > for loops. Like in function csr_matvec in sparse/sparsetools/csr.h (line > 1120). Is it possible to parallelize this loops with openmp? Short answer: no openmp in scipy. It has been discussed a number of times before, see for example http://numpy-discussion.10968.n7.nabble.com/Cython-based-OpenMP-accelerated-quartic-polynomial-solver-td41263.html#a41298 Cheers, Ralf > Do you have maybe better ideas to improve the performances for this kind > of operations? > > Best regards, > Marc Barbry > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Aug 29 05:18:49 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 29 Aug 2017 21:18:49 +1200 Subject: [SciPy-Dev] GSoC 2017 - Large-scale Constrained Optimization In-Reply-To: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com> References: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com> Message-ID: On Tue, Aug 29, 2017 at 10:47 AM, Antonio Ribeiro wrote: > Hi all, > > During the last few months I have worked on my Google Summer of Code > (GSoC) project, that consists of implementing a large-scale optimization > algorithm to be integrated to Scipy. > > The algorithm implemented was an interior point method described in < > https://antonior92.github.io/posts/2017/07/interior-point-method/>. A > series of blog post describe different aspects of the algorithm and its use > . > > The implementation can be found on the separate repository: > > > > and is being integrated to SciPy through the pull request #7729 > > > > During my GSoC I have implemented the optimization algorithm, tested it on > almost one hundred examples NumericalResults/> and created an interface for using it. While the > optimization solver is ready to be used and tested, there are still a > features I want to include, namely quasi-Newton approximations to the > Hessian matrix including: SR1 approximation; BFGS approximation; and, > L-BFGS approximation. I included those as optional items on my GSoC > proposal and, unfortunately, they will not be ready for the GSoC > submission. Furthermore, there are still some questions about how to best > integrate my implementation to the optimization SciPy library that are > still under discussion. However, I will tend to those final points in the > weeks following the end of the program. > > Finally, I would like to thanks SciPy community and my mentors: Nikolay, > Matt and Ralf, with whom I have greatly enjoyed the opportunity to work > with during these three months. > Thank you Antonio, was (is - don't go anywhere!) great working with you:) And thanks to Nikolay and Matt for doing all the hard mentoring work! 
Cheers, Ralf

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jomsdev at gmail.com Tue Aug 29 05:49:23 2017 From: jomsdev at gmail.com (Jordi Montes) Date: Tue, 29 Aug 2017 11:49:23 +0200 Subject: [SciPy-Dev] Clarkson Woodruff Transform implementation (Randomized Numerical Linear Algebra) Message-ID:

Hello,

I just made a pull request to the official repository with the implementation of the Clarkson Woodruff Transform (Count Min Sketch) for dense matrices. You can find more details of the pull request here .

I would not like to do it badly (this is my first pull request to the project), so any advice is welcome on the main implementation, tests, and documentation.

Jordi Montes.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From michelemartone at users.sourceforge.net Tue Aug 29 06:23:39 2017 From: michelemartone at users.sourceforge.net (Michele Martone) Date: Tue, 29 Aug 2017 12:23:39 +0200 Subject: [SciPy-Dev] Improving performance of sparse matrix multiplication In-Reply-To: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org> References: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org> Message-ID: <20170829102339.GA23500@localhost>

Dear Marc,

did you try the PyRSB prototype: "librsb is a high performance sparse matrix library implementing the Recursive Sparse Blocks format, which is especially well suited for multiplications in iterative methods on huge sparse matrices. PyRSB is a Cython-based Python interface to librsb." https://github.com/michelemartone/pyrsb ?

How large are your matrices? Are they symmetric? If your matrices are large you might get quite a speedup; if symmetric, even better.

Best regards, Michele

p.s.: PyRSB (a thin interface) is a prototype, but librsb itself http://librsb.sourceforge.net/ is in a mature state, usable also from Fortran, and OpenMP based.

On 20170828 at 18:14, marc wrote: > Dear Scipy developers, > > We are developing a program that perform a large number of sparse matrix > multiplications. We recently wrote a Python version of this program for > several reasons (the original code is in Fortran). > > We are trying now to improve the performance of the Python version and we > noticed that one of the bottlenecks are the sparse matrix multiplications, > as example, > > import numpy as np > from scipy.sparse import csr_matrix > > row = np.array([0, 0, 1, 2, 2, 2]) > col = np.array([0, 2, 2, 0, 1, 2]) > data = np.array([1, 2, 3, 4, 5, 6], dtype=np.float32) > > csr = csr_matrix((data, (row, col)), shape=(3, 3)) > print(csr.toarray()) > > A = np.array([1, 2, 3], dtype=np.float32) > > print(csr*A) > > > I started to look at the Scipy code to see how this functions were > implemented, and realized that there is no openmp parallelization over the > for loops. Like in function csr_matvec in sparse/sparsetools/csr.h (line > 1120). Is it possible to parallelize this loops with openmp? Do you have > maybe better ideas to improve the performances for this kind of operations? > > Best regards, > Marc Barbry > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev

-------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From profamelissa at gmail.com Tue Aug 29 09:32:35 2017 From: profamelissa at gmail.com (=?UTF-8?Q?Melissa_Weber_Mendon=C3=A7a?=) Date: Tue, 29 Aug 2017 13:32:35 +0000 Subject: [SciPy-Dev] scikit-procrustes Message-ID: Hello everybody, My name is Melissa, I'm a professor and researcher at the Federal University of Santa Catarina, in Brazil. I'm an applied mathematician working with Numerical Optimization and I've been working for some time with Procrustes Problems. So I realized there was no implementation of methods to solve the unbalanced Procrustes problem in python, and I decided to build a scikit. Here it is: - https://github.com/melissawm/skprocrustes http://scikits.scipy.org/scikit-procrustes All are welcome to criticize and to send pull requests if you feel you have something to contribute, I'd appreciate it. I hope this is useful for someone. Cheers! Melissa p.s. I'm not an expert in packaging and so it's highly possible that my code is not optimal, if you have any suggestions I'd appreciate as I'm trying to get better at this :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From profamelissa at gmail.com Tue Aug 29 09:32:44 2017 From: profamelissa at gmail.com (=?UTF-8?Q?Melissa_Weber_Mendon=C3=A7a?=) Date: Tue, 29 Aug 2017 13:32:44 +0000 Subject: [SciPy-Dev] GSoC 2017 - Large-scale Constrained Optimization In-Reply-To: References: <5527EEDC-E048-4B28-A78E-C4F8CAEE80F8@gmail.com> Message-ID: Hello, This is really great. Thank you, Ant?nio! I will definitely try to learn from your implementation. Cheers, Melissa Em ter, 29 de ago de 2017 ?s 06:19, Ralf Gommers escreveu: > On Tue, Aug 29, 2017 at 10:47 AM, Antonio Ribeiro > wrote: > >> Hi all, >> >> During the last few months I have worked on my Google Summer of Code >> (GSoC) project, that consists of implementing a large-scale optimization >> algorithm to be integrated to Scipy. >> >> The algorithm implemented was an interior point method described in < >> https://antonior92.github.io/posts/2017/07/interior-point-method/>. A >> series of blog post describe different aspects of the algorithm and its use >> . >> >> The implementation can be found on the separate repository: >> >> >> >> and is being integrated to SciPy through the pull request #7729 >> >> >> >> During my GSoC I have implemented the optimization algorithm, tested it >> on almost one hundred examples < >> https://antonior92.github.io/posts/2017/00/NumericalResults/> and >> created an interface for using it. While the optimization solver is ready >> to be used and tested, there are still a features I want to include, namely >> quasi-Newton approximations to the Hessian matrix including: SR1 >> approximation; BFGS approximation; and, L-BFGS approximation. I included >> those as optional items on my GSoC proposal and, unfortunately, they will >> not be ready for the GSoC submission. Furthermore, there are still some >> questions about how to best integrate my implementation to the optimization >> SciPy library that are still under discussion. However, I will tend to >> those final points in the weeks following the end of the program. >> >> Finally, I would like to thanks SciPy community and my mentors: Nikolay, >> Matt and Ralf, with whom I have greatly enjoyed the opportunity to work >> with during these three months. >> > > Thank you Antonio, was (is - don't go anywhere!) 
great working with you:) > And thanks to Nikolay and Matt for doing all the hard mentoring work! > > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sievert.scott at gmail.com Tue Aug 29 10:00:50 2017 From: sievert.scott at gmail.com (Scott Sievert) Date: Tue, 29 Aug 2017 09:00:50 -0500 Subject: [SciPy-Dev] scikit-procrustes In-Reply-To: References: Message-ID: How is the unbalanced problem different, and what applications does it have? I've used the Procrustes transform for some low dimensional embedding problems, and I'm curious to hear. Scott On Aug 29, 2017, 8:33 AM -0500, Melissa Weber Mendon?a , wrote: > Hello everybody, > > My name is Melissa, I'm a professor and researcher at the Federal University of Santa Catarina, in Brazil. I'm an applied mathematician working with Numerical Optimization and I've been working for some time with Procrustes Problems. So I realized there was no implementation of methods to solve the unbalanced Procrustes problem in python, and I decided to build a scikit. Here it is: > > > ? https://github.com/melissawm/skprocrustes > > > http://scikits.scipy.org/scikit-procrustes > > All are welcome to criticize and to send pull requests if you feel you have something to contribute, I'd appreciate it. I hope this is useful for someone. > > Cheers! > > Melissa > > p.s. I'm not an expert in packaging and so it's highly possible that my code is not optimal, if you have any suggestions I'd appreciate as I'm trying to get better at this :) > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From profamelissa at gmail.com Tue Aug 29 10:21:59 2017 From: profamelissa at gmail.com (=?UTF-8?Q?Melissa_Weber_Mendon=C3=A7a?=) Date: Tue, 29 Aug 2017 14:21:59 +0000 Subject: [SciPy-Dev] scikit-procrustes In-Reply-To: References: Message-ID: In general, in unbalanced problems you are looking for a solution with more rows than columns (i.e. more data than variables, in the statistical sense). So the problem does not have a closed form solution, but needs to be solved via iterative methods. The balanced case is solved by performing one SVD, and the unbalanced problem needs roughly one SVD per iteration, and in ill conditioned problems this might mean a large number of SVD's. Together with a colleague we have studied a new method for this (cited in the docs), and I had to test it against other previously developed methods, so I had to implement the scikit for my research purposes. From what I gather, in Psychometrics it might be useful to use the unbalanced problem if you are looking to find redundancies in the variables, for example. - Melissa Em ter, 29 de ago de 2017 ?s 11:02, Scott Sievert escreveu: > How is the unbalanced problem different, and what applications does it > have? I've used the Procrustes transform for some low dimensional embedding > problems, and I'm curious to hear. > > Scott > > On Aug 29, 2017, 8:33 AM -0500, Melissa Weber Mendon?a < > profamelissa at gmail.com>, wrote: > > Hello everybody, > > My name is Melissa, I'm a professor and researcher at the Federal > University of Santa Catarina, in Brazil. 
I'm an applied mathematician > working with Numerical Optimization and I've been working for some time > with Procrustes Problems. So I realized there was no implementation of > methods to solve the unbalanced Procrustes problem in python, and I decided > to build a scikit. Here it is: > > > - https://github.com/melissawm/skprocrustes > > > http://scikits.scipy.org/scikit-procrustes > > All are welcome to criticize and to send pull requests if you feel you > have something to contribute, I'd appreciate it. I hope this is useful for > someone. > > Cheers! > > Melissa > > p.s. I'm not an expert in packaging and so it's highly possible that my > code is not optimal, if you have any suggestions I'd appreciate as I'm > trying to get better at this :) > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nonhermitian at gmail.com Tue Aug 29 10:33:00 2017 From: nonhermitian at gmail.com (Paul Nation) Date: Tue, 29 Aug 2017 08:33:00 -0600 Subject: [SciPy-Dev] Improving performance of sparse matrix multiplication In-Reply-To: References: <38f0eebc-9cc8-c636-64b5-911910f7d542@mailoo.org> <20170829102339.GA23500@localhost> Message-ID: It is possible to write your own openmp implimentation, performing the row multiplications in parallel, using Cython or whatever low language you like. Since sparse matrix dense vector multiplication is memory bandwidth limited, you will see good performance if the problem fits into CPU cache. Otherwise, you will see only marginal improvements. If your problem has structure, then that should be exploited to make things faster. Also permutations like reverse Cuthill-Mckee, in SciPy, can give a modest boost. Best, Paul On Aug 29, 2017 04:26, "Michele Martone" < michelemartone at users.sourceforge.net> wrote: Dear Marc, did you try the PyRSB prototype: "librsb is a high performance sparse matrix library implementing the Recursive Sparse Blocks format, which is especially well suited for multiplications in iterative methods on huge sparse matrices. PyRSB is a Cython-based Python interface to librsb." https://github.com/michelemartone/pyrsb ? How large are your matrices ? Are they symmetric ? If your matrices are large you might get quite of a speedup; if symmetric, even better. Best regards, Michele p.s.: PyRSB (a thin interface) is a prototype, but librsb itself http://librsb.sourceforge.net/ is in a mature state and usable also from Fortran, and OpenMP based. On 20170828 at 18:14, marc wrote: > Dear Scipy developers, > > We are developing a program that perform a large number of sparse matrix > multiplications. We recently wrote a Python version of this program for > several reasons (the original code is in Fortran). 
> > We are trying now to improve the performance of the Python version and we > noticed that one of the bottlenecks are the sparse matrix multiplications, > as example, > > import numpy as np > from scipy.sparse import csr_matrix > > row = np.array([0, 0, 1, 2, 2, 2]) > col = np.array([0, 2, 2, 0, 1, 2]) > data = np.array([1, 2, 3, 4, 5, 6], dtype=np.float32) > > csr = csr_matrix((data, (row, col)), shape=(3, 3)) > print(csr.toarray()) > > A = np.array([1, 2, 3], dtype=np.float32) > > print(csr*A) > > > I started to look at the Scipy code to see how this functions were > implemented, and realized that there is no openmp parallelization over the > for loops. Like in function csr_matvec in sparse/sparsetools/csr.h (line > 1120). Is it possible to parallelize this loops with openmp? Do you have > maybe better ideas to improve the performances for this kind of operations? > > Best regards, > Marc Barbry > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev _______________________________________________ SciPy-Dev mailing list SciPy-Dev at python.org https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Aug 30 01:32:49 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 29 Aug 2017 22:32:49 -0700 Subject: [SciPy-Dev] establishing a Code of Conduct for SciPy In-Reply-To: References: Message-ID: On Sat, Aug 26, 2017 at 7:57 PM, Charles R Harris wrote: > > > On Sat, Aug 26, 2017 at 7:32 PM, Nathaniel Smith wrote: >> >> On Sat, Aug 26, 2017 at 6:14 PM, Ralf Gommers >> wrote: >> > >> > Thanks for the feedback Chuck. Pauli said on GitHub he preferred >> > something >> > shorter and less flowery as well, so that's two votes for the >> > contributor >> > covenant or something more in that direction. >> >> It looks like the link above goes to the 1.2 version of the >> Contributor Covenant, while the latest version is 1.4: >> >> https://www.contributor-covenant.org/version/1/4/ >> >> The 1.4 version has a little more detail on venues and a slot to put >> in a contact address. >> > > The extra sections are good but I think the long enumerations dilute the > message. All means all is a stronger statement than a list of every > conceivable attribute. I think the 1.2 version is better in that regard. Folks who look like you or me aren't really the intended audience for this ? we already know we're included :-). The important thing is how the message comes across to those who might otherwise feel excluded. So I think this is a place where we should defer to the experts. -n -- Nathaniel J. Smith -- https://vorpus.org From ralf.gommers at gmail.com Wed Aug 30 05:58:14 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 30 Aug 2017 21:58:14 +1200 Subject: [SciPy-Dev] scikit-procrustes In-Reply-To: References: Message-ID: On Wed, Aug 30, 2017 at 2:21 AM, Melissa Weber Mendon?a < profamelissa at gmail.com> wrote: > In general, in unbalanced problems you are looking for a solution with > more rows than columns (i.e. more data than variables, in the statistical > sense). So the problem does not have a closed form solution, but needs to > be solved via iterative methods. The balanced case is solved by performing > one SVD, and the unbalanced problem needs roughly one SVD per iteration, > and in ill conditioned problems this might mean a large number of SVD's. 
> > Together with a colleague we have studied a new method for this (cited in > the docs), and I had to test it against other previously developed methods, > so I had to implement the scikit for my research purposes. From what I > gather, in psychometrics it might be useful to use the unbalanced problem > if you are looking to find redundancies in the variables, for example. > > - Melissa > > On Tue, Aug 29, 2017 at 11:02, Scott Sievert > wrote: > >> How is the unbalanced problem different, and what applications does it >> have? I've used the Procrustes transform for some low-dimensional embedding >> problems, and I'm curious to hear. >> >> Scott >> >> On Aug 29, 2017, 8:33 AM -0500, Melissa Weber Mendonça < >> profamelissa at gmail.com>, wrote: >> >> Hello everybody, >> >> My name is Melissa, I'm a professor and researcher at the Federal >> University of Santa Catarina, in Brazil. I'm an applied mathematician >> working with Numerical Optimization and I've been working for some time >> with Procrustes Problems. So I realized there was no implementation of >> methods to solve the unbalanced Procrustes problem in Python, and I decided >> to build a scikit. Here it is: >> >> >> - https://github.com/melissawm/skprocrustes >> >> >> http://scikits.scipy.org/scikit-procrustes >> >> All are welcome to criticize and to send pull requests if you feel you >> have something to contribute; I'd appreciate it. I hope this is useful for >> someone. >> >> Hi Melissa, welcome! For those of us not very familiar with Procrustes Problems, it may be good to point out the differences between scipy.linalg.orthogonal_procrustes, scipy.spatial.procrustes and your scikit. And perhaps it makes sense to cross-link to/from your scikit in the docstrings of those scipy functions? Cheers, Ralf >> Cheers! >> >> Melissa >> >> p.s. I'm not an expert in packaging, so it's highly possible that my >> code is not optimal; if you have any suggestions, I'd appreciate them, as I'm >> trying to get better at this :) >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From warren.weckesser at gmail.com Thu Aug 31 01:22:26 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 31 Aug 2017 01:22:26 -0400 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal Message-ID: I've been using quite a few of the functions in scipy.signal recently, and I've been reminded of some of the quirks of the API. I have submitted a pull request to clean up one of those quirks.
In the new pull request, I have added the argument 'fs' (the sampling frequency) to the following functions: firwin, firwin2, firls, and remez. For firwin, firwin2 and firls, the new argument replaces 'nyq', and for remez, it replaces 'Hz'. This makes these functions consistent with the other functions in which the sampling frequency is specified using 'fs': periodogram, welch, csd, coherence, spectrogram, stft, and istft. 'fs', or in LaTeX, $f_s$, is common notation for the sampling frequency in the DSP literature.
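Concretely, assuming the pull request lands as described, a lowpass firwin design would go from normalizing the band edge by the Nyquist frequency to passing the sampling frequency directly (the numbers below are made up):

    from scipy import signal

    fs = 1000.0      # sampling frequency in Hz (made-up value)
    cutoff = 100.0   # desired band edge, in the same units as fs

    # Old style: the edge is expressed relative to the Nyquist frequency.
    taps_old = signal.firwin(101, cutoff, nyq=fs / 2)

    # Proposed style: pass the sampling frequency itself.
    taps_new = signal.firwin(101, cutoff, fs=fs)

    # Both calls describe the same filter, since nyq = fs / 2.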
In the pull request, the old argument is given a "soft" deprecation. That means the docstring says the argument is deprecated, but code to actually generate a DeprecationWarning has not been added yet. I'm fine with adding that now, but some might prefer a relatively long and soft transition for these changes. (Personally, I don't mind if they hang around for a while, but they should be gone by 2.0. :)
I haven't changed the default value of the sampling frequency. For the FIR filter design functions firls, firwin and firwin2, the default is nyq=1.0 (equivalent to fs=2), while for remez the default is Hz=1 (i.e. fs=1). The functions that currently already use 'fs' all have the default fs=1. Changing the default for the FIR design functions would be a much more disruptive change.
Comments here or on the pull request are welcome.
P.S. I can see future pull requests in which 'fs' is added to functions that currently don't have an argument to specify the sampling frequency. I'm looking at you, freqz. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gael.varoquaux at normalesup.org Thu Aug 31 06:13:21 2017 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 31 Aug 2017 12:13:21 +0200 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal In-Reply-To: References: Message-ID: <20170831101321.GB169189@phare.normalesup.org> Hi Warren, Good to see that you are working on improving the scipy.signal API. I have no strong opinion on the changes that you propose. I am really not an expert in DSP. I do find scipy.signal difficult to work with. While we are at it: is there a reason that scipy.signal.spline_filter only supports 2D arrays? I naively thought that it would be good functionality to have on 1D signals. Gaël
On Thu, Aug 31, 2017 at 01:22:26AM -0400, Warren Weckesser wrote: > I've been using quite a few of the functions in scipy.signal recently, and I've > been reminded of some of the quirks of the API. I have submitted a pull request > to clean up one of those quirks. > In the new pull request, I have added the argument 'fs' (the sampling > frequency) to the following functions: firwin, firwin2, firls, and remez. For > firwin, firwin2 and firls, the new argument replaces 'nyq', and for remez, it > replaces 'Hz'. This makes these functions consistent with the other functions > in which the sampling frequency is specified using 'fs': periodogram, welch, > csd, coherence, spectrogram, stft, and istft. 'fs', or in LaTeX, $f_s$, is > common notation for the sampling frequency in the DSP literature. > In the pull request, the old argument is given a "soft" deprecation. That means > the docstring says the argument is deprecated, but code to actually generate a > DeprecationWarning has not been added yet. I'm fine with adding that now, but > some might prefer a relatively long and soft transition for these changes. > (Personally, I don't mind if they hang around for a while, but they should be > gone by 2.0. :) > I haven't changed the default value of the sampling frequency. For the FIR > filter design functions firls, firwin and firwin2, the default is nyq=1.0 > (equivalent to fs=2), while for remez the default is Hz=1 (i.e. fs=1). The > functions that currently already use 'fs' all have the default fs=1. Changing > the default for the FIR design functions would be a much more disruptive > change. > Comments here or on the pull request are welcome. > P.S.
I can see future pull requests in which 'fs' is added to functions that > currently don't have an argument to specify the sampling frequency. I'm > looking at you, freqz. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -- Gael Varoquaux Researcher, INRIA Parietal NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-79-68 http://gael-varoquaux.info http://twitter.com/GaelVaroquaux
From warren.weckesser at gmail.com Thu Aug 31 13:55:39 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 31 Aug 2017 13:55:39 -0400 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal In-Reply-To: <20170831101321.GB169189@phare.normalesup.org> References: <20170831101321.GB169189@phare.normalesup.org> Message-ID: On Thu, Aug 31, 2017 at 6:13 AM, Gael Varoquaux < gael.varoquaux at normalesup.org> wrote: > Hi Warren, > > Good to see that you are working on improving the scipy.signal API. I have > no strong opinion on the changes that you propose. I am really not an > expert in DSP. I do find scipy.signal difficult to work with. > > While we are at it: is there a reason that scipy.signal.spline_filter > only supports 2D arrays? I naively thought that it would be good > functionality to have on 1D signals. > > Sorry, I don't know the reason for that. Warren Gaël > > On Thu, Aug 31, 2017 at 01:22:26AM -0400, Warren Weckesser wrote: > > I've been using quite a few of the functions in scipy.signal recently, > and I've > > been reminded of some of the quirks of the API. I have submitted a pull > request > > to clean up one of those quirks. > > > In the new pull request, I have added the argument 'fs' (the sampling > > frequency) to the following functions: firwin, firwin2, firls, and > remez. For > > firwin, firwin2 and firls, the new argument replaces 'nyq', and for > remez, it > > replaces 'Hz'. This makes these functions consistent with the other > functions > > in which the sampling frequency is specified using 'fs': periodogram, > welch, > > csd, coherence, spectrogram, stft, and istft. 'fs', or in LaTeX, $f_s$, > is > > common notation for the sampling frequency in the DSP literature. > > > In the pull request, the old argument is given a "soft" deprecation. > That means > > the docstring says the argument is deprecated, but code to actually > generate a > > DeprecationWarning has not been added yet. I'm fine with adding that > now, but > > some might prefer a relatively long and soft transition for these > changes. > > (Personally, I don't mind if they hang around for a while, but they > should be > > gone by 2.0. :) > > > I haven't changed the default value of the sampling frequency. For the > FIR > > filter design functions firls, firwin and firwin2, the default is nyq=1.0 > > (equivalent to fs=2), while for remez the default is Hz=1 (i.e. fs=1). > The > > functions that currently already use 'fs' all have the default fs=1. > Changing > > the default for the FIR design functions would be a much more disruptive > > change. > > > Comments here or on the pull request are welcome. > > > P.S. I can see future pull requests in which 'fs' is added to functions > that > > currently don't have an argument to specify the sampling frequency. I'm > > looking at you, freqz.
> > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at python.org > > https://mail.python.org/mailman/listinfo/scipy-dev > > > -- > Gael Varoquaux > Researcher, INRIA Parietal > NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France > Phone: ++ 33-1-69-08-79-68 > http://gael-varoquaux.info http://twitter.com/GaelVaroquaux > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From phillip.m.feldman at gmail.com Thu Aug 31 14:41:50 2017 From: phillip.m.feldman at gmail.com (Phillip Feldman) Date: Thu, 31 Aug 2017 11:41:50 -0700 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal In-Reply-To: References: Message-ID: This is an excellent change! What does the code do if the caller specifies both `fs` and `nyq`? On Wed, Aug 30, 2017 at 10:22 PM, Warren Weckesser < warren.weckesser at gmail.com> wrote: > I've been using quite a few of the functions in scipy.signal recently, and > I've been reminded of some of the quirks of the API. I have submitted a > pull request to clean up one of those quirks. > > In the new pull request, I have added the argument 'fs' (the sampling > frequency) to the following functions: firwin, firwin2, firls, and remez. > For firwin, firwin2 and firls, the new argument replaces 'nyq', and for > remez, it replaces 'Hz'. This makes these functions consistent with the > other functions in which the sampling frequency is specified using 'fs': > periodogram, welch, csd, coherence, spectrogram, stft, and istft. 'fs', or > in LaTeX, $f_s$, is common notation for the sampling frequency in the DSP > literature. > > In the pull request, the old argument is given a "soft" deprecation. That > means the docstring says the argument is deprecated, but code to actually > generate a DeprecationWarning has not been added yet. I'm fine with adding > that now, but some might prefer a relatively long and soft transition for > these changes. (Personally, I don't mind if they hang around for a while, > but they should be gone by 2.0. :) > > I haven't changed the default value of the sampling frequency. For the > FIR filter design functions firls, firwin and firwin2, the default is > nyq=1.0 (equivalent to fs=2), while for remez the default is Hz=1 (i.e. > fs=1). The functions that currently already use 'fs' all have the default > fs=1. Changing the default for the FIR design functions would be a much > more disruptive change. > > Comments here or on the pull request are welcome. > > P.S. I can see future pull requests in which 'fs' is added to functions > that currently don't have an argument to specify the sampling frequency. > I'm looking at you, freqz. > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From larson.eric.d at gmail.com Thu Aug 31 14:43:54 2017 From: larson.eric.d at gmail.com (Eric Larson) Date: Thu, 31 Aug 2017 14:43:54 -0400 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal In-Reply-To: References: Message-ID: It currently is set to throw an error. Eric On Thu, Aug 31, 2017 at 2:41 PM, Phillip Feldman < phillip.m.feldman at gmail.com> wrote: > This is an excellent change!
> > What does the code do if the caller specifies both `fs` and `nyq`? > > On Wed, Aug 30, 2017 at 10:22 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> I've been using quite a few of the functions in scipy.signal recently, >> and I've been reminded of some of the quirks of the API. I have submitted a >> pull request to clean up one of those quirks. >> >> In the new pull request, I have added the argument 'fs' (the sampling >> frequency) to the following functions: firwin, firwin2, firls, and remez. >> For firwin, firwin2 and firls, the new argument replaces 'nyq', and for >> remez, it replaces 'Hz'. This makes these functions consistent with the >> other functions in which the sampling frequency is specified using 'fs': >> periodogram, welch, csd, coherence, spectrogram, stft, and istft. 'fs', or >> in LaTeX, $f_s$, is common notation for the sampling frequency in the DSP >> literature. >> >> In the pull request, the old argument is given a "soft" deprecation. That >> means the docstring says the argument is deprecated, but code to actually >> generate a DeprecationWarning has not been added yet. I'm fine with adding >> that now, but some might prefer a relatively long and soft transition for >> these changes. (Personally, I don't mind if they hang around for a while, >> but they should be gone by 2.0. :) >> >> I haven't changed the default value of the sampling frequency. For the >> FIR filter design functions firls, firwin and firwin2, the default is >> nyq=1.0 (equivalent to fs=2), while for remez the default is Hz=1 (i.e. >> fs=1). The functions that currently already use 'fs' all have the default >> fs=1. Changing the default for the FIR design functions would be a much >> more disruptive change. >> >> Comments here or on the pull request are welcome. >> >> P.S. I can see future pull requests in which 'fs' is added to functions >> that currently don't have an argument to specify the sampling frequency. >> I'm looking at you, freqz. >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Thu Aug 31 14:45:10 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 31 Aug 2017 14:45:10 -0400 Subject: [SciPy-Dev] Proposed API change for some functions in scipy.signal In-Reply-To: References: Message-ID: On Thu, Aug 31, 2017 at 2:41 PM, Phillip Feldman < phillip.m.feldman at gmail.com> wrote: > This is an excellent change! > > Thanks Phillip. What does the code do if the caller specifies both `fs` and `nyq`? > > It raises an exception: ValueError("Values cannot be given for both 'nyq' and 'fs'.") Warren On Wed, Aug 30, 2017 at 10:22 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> I've been using quite a few of the functions in scipy.signal recently, >> and I've been reminded of some of the quirks of the API. I have submitted a >> pull request to clean up one of those quirks. >> >> In the new pull request, I have added the argument 'fs' (the sampling >> frequency) to the following functions: firwin, firwin2, firls, and remez. >> For firwin, firwin2 and firls, the new argument replaces 'nyq', and for >> remez, it replaces 'Hz'. 
This makes these functions consistent with the >> other functions in which the sampling frequency is specified using 'fs': >> periodogram, welch, csd, coherence, spectrogram, stft, and istft. 'fs', or >> in LaTeX, $f_s$, is common notation for the sampling frequency in the DSP >> literature. >> >> In the pull request, the old argument is given a "soft" deprecation. That >> means the docstring says the argument is deprecated, but code to actually >> generate a DeprecationWarning has not been added yet. I'm fine with adding >> that now, but some might prefer a relatively long and soft transition for >> these changes. (Personally, I don't mind if they hang around for a while, >> but they should be gone by 2.0. :) >> >> I haven't changed the default value of the sampling frequency. For the >> FIR filter design functions firls, firwin and firwin2, the default is >> nyq=1.0 (equivalent to fs=2), while for remez the default is Hz=1 (i.e. >> fs=1). The functions that currently already use 'fs' all have the default >> fs=1. Changing the default for the FIR design functions would be a much >> more disruptive change. >> >> Comments here or on the pull request are welcome. >> >> P.S. I can see future pull requests in which 'fs' is added to functions >> that currently don't have an argument to specify the sampling frequency. >> I'm looking at you, freqz. >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
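The mutual-exclusion check described above amounts to a few lines; a sketch of such a guard (the helper name is hypothetical, not necessarily what the pull request uses):

    def _resolve_fs(fs=None, nyq=None):
        # Hypothetical helper: resolve the sampling frequency from either
        # the new 'fs' argument or the deprecated 'nyq' argument.
        if fs is not None and nyq is not None:
            raise ValueError("Values cannot be given for both 'nyq' and 'fs'.")
        if nyq is not None:
            return 2.0 * nyq  # the Nyquist frequency is half the sampling rate
        return fs

With a guard like this, a call such as firwin(101, 100.0, nyq=500.0, fs=1000.0) would raise the ValueError quoted above.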