From pablo.winant at gmail.com Fri Mar 1 05:58:27 2013 From: pablo.winant at gmail.com (Pablo Winant) Date: Fri, 01 Mar 2013 11:58:27 +0100 Subject: [SciPy-Dev] Multilinear interpolation In-Reply-To: References: <51260E4E.4010405@gmail.com> <51263127.7070306@gmail.com> <512B533D.1000105@gmail.com> <512BC3BA.9030506@gmail.com> Message-ID: <513089D3.7010607@gmail.com> So I have updated the tempita template using fused types. I also included code generation helper functions in python blocks. I found a problem with the template generation engine (https://groups.google.com/forum/?fromgroups=#!topic/paste-users/bwrf6xPhIjo). Basically, functions defined in python blocks cannot depend upon each other. In particular, they cannot be recursive. I avoided the problem by defining a dummy function but this is of course not very elegant. On the pyx side: - I included a "with nogil, parallel():" instruction but I am not sure if it has enabled any parallel calculation at all. CPU usage seems to stay on one core only. - By looking at the generated C file, I suspect there is still a small cython overhead. I see conditional branches inside the loop while there should be none. - Maybe I should not be using memoryviews inside the loop, and use arrays of pointers instead? Then would it be enough to pass memoryviews as arguments in order to ensure data contiguity? - Performance is not so bad compared to Matlab. I tried to evaluate a function defined over a [4,3,5,6] 4-d grid on 10000000 random points and I got 0.2s vs 1.4s for Matlab's interpn(...,'linear*'). Of course this is to be taken with a grain of salt and I think it could still be improved a bit. On 25/02/2013 21:19, Pauli Virtanen wrote: > 25.02.2013 22:04, Pablo Winant kirjoitti: > [clip] >>> - The code assumes C-contiguous arrays, but does not check for it. >> right > Note that you can make Cython check for this, the syntax is > sometype[::1] or sometype[:,::1] for multidim (see below). 
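For readers following the thread: multilinear interpolation evaluates a function stored on a regular grid by blending the values at the 2**N corners of the cell containing each query point. A minimal pure-Python sketch of the 2-d (bilinear) case — illustrative only, not the Cython code under discussion:

```python
def interp_bilinear(values, x, y):
    """Bilinear interpolation on an integer-spaced 2-d grid.

    values[i][j] holds f(i, j); (x, y) must lie strictly inside the grid.
    """
    i0, j0 = int(x), int(y)          # lower-left corner of the enclosing cell
    tx, ty = x - i0, y - j0          # fractional position inside the cell
    return (values[i0][j0] * (1 - tx) * (1 - ty)
            + values[i0 + 1][j0] * tx * (1 - ty)
            + values[i0][j0 + 1] * (1 - tx) * ty
            + values[i0 + 1][j0 + 1] * tx * ty)

# A function that is linear in each variable is reproduced exactly.
vals = [[x + 2 * y for y in range(4)] for x in range(4)]
print(interp_bilinear(vals, 1.5, 2.25))  # 6.0
```

The N-dimensional generalization loops over all 2**N corner/weight combinations, which is essentially what the templated Cython version enumerates at compile time.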
> > [clip] >>> You can use fused types instead of templating; the end result is the same, >>> but Cython takes care of picking the correct routine. >>> >> I tried it but I didn't see how to do it. I would like to define a >> function like >> >> def function(ndarray[fused_type, ndim=2]): >> ... >> cdef fused_type internal_variable >> ... >> return something > The syntax for these arrays is > > cdef fused number_t: > double > double complex > > > def my_function(number_t[:,:] arg): > ... > > I think it doesn't work with the (old) np.ndarray syntax. > From survinderpal at gmail.com Fri Mar 1 13:34:02 2013 From: survinderpal at gmail.com (survinder pal) Date: Sat, 2 Mar 2013 00:04:02 +0530 Subject: [SciPy-Dev] contribute to scipy.signal Message-ID: Hello... I am working in the area of signal processing. I want to contribute to scipy.signal in the signal processing and wavelets domain. Suggestions are welcome... On Fri, Mar 1, 2013 at 11:30 PM, wrote: > Send SciPy-Dev mailing list submissions to > scipy-dev at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-dev > or, via email, send a message with subject or body 'help' to > scipy-dev-request at scipy.org > > You can reach the person managing the list at > scipy-dev-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-Dev digest..." > > > Today's Topics: > > 1. Re: Like to contribute to Scipy (Ralf Gommers) > 2. 
Re: Multilinear interpolation (Pablo Winant) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 28 Feb 2013 21:27:37 +0100 > From: Ralf Gommers > Subject: Re: [SciPy-Dev] Like to contribute to Scipy > To: SciPy Developers List > Message-ID: > vHJ9z1xDQN6XvJUqb8UxA at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > On Thu, Feb 28, 2013 at 7:39 AM, anand parthasarathy >wrote: > > > Hi ralf, > > > > Thank you for your reply. I am thinking of making the Scipy.signal module > > to have the functions for communication, like communication toolbox in > > matlab. We could join and can effectively launch packages including > > features like Error correction mechanisms, channel functions etc. will > this > > be useful ? > > > > Hi Anand, to be honest I think those features are too specialized for > Scipy. I don't know of any Python packages with that type of functionality > (but I haven't searched and I'm not an expert). Maybe someone else has a > good suggestion. You could consider starting a new scikit if you plan on > implementing enough functionality to justify that. > > Cheers, > Ralf > > > > > > > Else can you point me towards any project involving productive results. > > > > > > > > > > > > > > Regards, > > Anand parthasarathy > > > > On Mon, Feb 25, 2013 at 2:11 AM, Ralf Gommers >wrote: > > > >> > >> > >> > >> On Tue, Feb 19, 2013 at 8:44 AM, anand parthasarathy < > anandps20 at gmail.com > >> > wrote: > >> > >>> Hi, > >>> > >>> I would like to start contribute to Scipy - signal processing module. I > >>> have been using MATLAB for a very long time, to the extend that i have > >>> implemented many signal processing algorithms targeting the > applications in > >>> the field of Communications & wireless PHY Layer . > >>> > >>> I am currently a Research Intern at Centre of Excellence in Wireless > >>> Technology (CEWiT), India. 
I am so much fascinated by Scipy and though > i am > >>> new to python, i started love to code in python environment. > >>> > >>> Since this field is like a ocean , i would like to get suggestions from > >>> you guys on this like where to start , and after decided with a > specific > >>> part to work on , i will start and contribute many things to Scipy to > the > >>> extend that i can. > >>> > >> > >> Hi Anand, welcome! The signal module certainly could use your > >> contributions. > >> > >> The same question you're asking was asked about a month ago by someone > >> else, please have a look at this thread: > >> http://thread.gmane.org/gmane.comp.python.scientific.devel/17173 . It > >> contains a few ideas and pointers for getting started. If you have ideas > >> for new algorithms/functionality to contribute, I recommend that you > first > >> discuss them on this mailing list to make sure the code you write will > be a > >> good fit for scipy. > >> > >> If you have more specific questions later, please don't hesitate to ask. > >> > >> Cheers, > >> Ralf > >> > >> > >> > >>> > >>> > >>> > >>> > >>> > >>> > >>> Regards, > >>> Anand Parthasarathy > >>> > >>> > >>> > >>> _______________________________________________ > >>> SciPy-Dev mailing list > >>> SciPy-Dev at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>> > >>> > >> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> > >> > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.scipy.org/pipermail/scipy-dev/attachments/20130228/78b3f69b/attachment-0001.html > > ------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > End of SciPy-Dev Digest, Vol 113, Issue 1 > ***************************************** > -- regards Survinder Pal M.Tech(AIE) CSIR-CSIO Chandigarh survinderpal at csio.res.in 9569868960 -------------- next part -------------- An HTML attachment was scrubbed... URL: From smith.daniel.br at gmail.com Fri Mar 1 15:06:48 2013 From: smith.daniel.br at gmail.com (Daniel Smith) Date: Fri, 1 Mar 2013 15:06:48 -0500 Subject: [SciPy-Dev] LIL sparse matrix pull request Message-ID: After some very helpful feedback, I think my update of the LIL sparse matrix class is ready to go. The pull request is here: https://github.com/scipy/scipy/pull/425 Thanks, Daniel From warren.weckesser at gmail.com Fri Mar 1 16:01:36 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 1 Mar 2013 16:01:36 -0500 Subject: [SciPy-Dev] git question: fetch a PR without merging into master Message-ID: Hey all, Does anyone have a git recipe for grabbing a pull request from github without merging it into master? 
Ideally, I'd like to branch from the same point the PR branches from master, and not merge the branch back into master. The use case is to test pull requests that have merge conflicts. I'd like to be able to experiment with the PR, while putting off resolving the merge conflicts until later. Thanks, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From punchagan at gmail.com Fri Mar 1 16:07:13 2013 From: punchagan at gmail.com (Puneeth Chaganti) Date: Sat, 2 Mar 2013 02:37:13 +0530 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: Warren, You could do something like git fetch <remote> <remote-branch>:<local-branch> git checkout <local-branch> Hope that helps, Puneeth -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Mar 1 16:08:38 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 1 Mar 2013 16:08:38 -0500 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 4:01 PM, Warren Weckesser wrote: > Hey all, > > Does anyone have a git recipe for grabbing a pull request from github > without merging it into master? Ideally, I'd like to branch from the same > point the PR branches from master, and not merge the branch back into > master. The use case is to test pull requests that have merge conflicts. > I'd like to be able to experiment with the PR, while putting off resolving > the merge conflicts until later. maybe something like this, if you are not on Windows (I never tried but something like this made the mailing lists a while ago.) 
https://github.com/splitbrain/git-pull-request Josef > > Thanks, > > Warren > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From jakevdp at cs.washington.edu Fri Mar 1 16:09:57 2013 From: jakevdp at cs.washington.edu (Jacob Vanderplas) Date: Fri, 1 Mar 2013 13:09:57 -0800 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: Hi Warren, github offers a little-known cheat sheet on this: go to the bottom of the PR page, and click the little "i" icon next to the "This pull request can be automatically merged" This shows the commands to create a local branch on your machine and pull down changes associated with the pull request. Jake On Fri, Mar 1, 2013 at 1:01 PM, Warren Weckesser wrote: > Hey all, > > Does anyone have a git recipe for grabbing a pull request from github > without merging it into master? Ideally, I'd like to branch from the same > point the PR branches from master, and not merge the branch back into > master. The use case is to test pull requests that have merge conflicts. > I'd like to be able to experiment with the PR, while putting off resolving > the merge conflicts until later. > > Thanks, > > Warren > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From punchagan at gmail.com Fri Mar 1 16:13:04 2013 From: punchagan at gmail.com (Puneeth Chaganti) Date: Sat, 2 Mar 2013 02:43:04 +0530 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: Warren, Look at https://gist.github.com/piscisaureus/3342247 for something much quicker. Regards, Puneeth -------------- next part -------------- An HTML attachment was scrubbed... 
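For the archive: the gist Puneeth links works because GitHub publishes every pull request under the read-only ref refs/pull/&lt;id&gt;/head, which can be fetched into a local branch without merging anything. A self-contained sketch using throwaway local repositories in place of GitHub (the paths and the PR number 42 are made up):

```shell
set -e
tmp=$(mktemp -d)

# A bare repository standing in for the GitHub-hosted "origin".
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name "Dev"

echo base > file.txt
git add file.txt
git commit -qm "initial commit"
git push -q origin HEAD

# Simulate a pull request: GitHub exposes PR number N at refs/pull/N/head.
git checkout -q -b feature
echo change >> file.txt
git commit -qam "feature work"
git push -q origin HEAD:refs/pull/42/head

# Fetch the PR head into its own local branch -- nothing touches master.
git fetch -q origin pull/42/head:pr-42
git checkout -q pr-42
git log --oneline -1
```

Against a real GitHub remote only the last three commands are needed, with 42 replaced by the actual pull request number.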
URL: From warren.weckesser at gmail.com Fri Mar 1 16:21:59 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 1 Mar 2013 16:21:59 -0500 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 4:09 PM, Jacob Vanderplas wrote: > Hi Warren, > github offers a little-known cheat sheet on this: go to the bottom of the > PR page, and click the little "i" icon next to the "This pull request can > be automatically merged" This shows the commands to create a local branch > on your machine and pull down changes associated with the pull request. > Jake > > Thanks Jake. I use those hints a lot, but when the PR can not be merged automatically, the 'git pull' command suggested there results in a merge with master, and that will result in conflicts that have to be resolved. Warren > On Fri, Mar 1, 2013 at 1:01 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> Hey all, >> >> Does anyone have a git recipe for grabbing a pull request from github >> without merging it into master? Ideally, I'd like to branch from the same >> point the PR branches from master, and not merge the branch back into >> master. The use case is to test pull requests that have merge conflicts. >> I'd like to be able to experiment with the PR, while putting off resolving >> the merge conflicts until later. >> >> Thanks, >> >> Warren >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at gmail.com Fri Mar 1 16:28:54 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 1 Mar 2013 16:28:54 -0500 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 4:08 PM, wrote: > On Fri, Mar 1, 2013 at 4:01 PM, Warren Weckesser > wrote: > > Hey all, > > > > Does anyone have a git recipe for grabbing a pull request from github > > without merging it into master? Ideally, I'd like to branch from the > same > > point the PR branches from master, and not merge the branch back into > > master. The use case is to test pull requests that have merge conflicts. > > I'd like to be able to experiment with the PR, while putting off > resolving > > the merge conflicts until later. > > maybe something like this, if you are not on Windows > (I never tried but something like this made the mailing lists a while ago.) > > https://github.com/splitbrain/git-pull-request > > Josef > > Thanks Josef, that looks nice, I'll have to try it out--but Puneeth's second suggestion might make it moot! Warren > > > Thanks, > > > > Warren > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Fri Mar 1 16:31:13 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 1 Mar 2013 16:31:13 -0500 Subject: [SciPy-Dev] git question: fetch a PR without merging into master In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 4:13 PM, Puneeth Chaganti wrote: > Warren, > > Look at https://gist.github.com/piscisaureus/3342247 for something much > quicker. 
> > Regards, > Puneeth > > Nice, that makes it easy to keep *all* PRs up-to-date. Thanks for the tip! Warren > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at hilboll.de Sat Mar 2 06:55:43 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Sat, 02 Mar 2013 12:55:43 +0100 Subject: [SciPy-Dev] running into errors in scipycentral In-Reply-To: References: <512F62F9.7030705@hilboll.de> Message-ID: <5131E8BF.3010700@hilboll.de> >>> > I just configured scipycentral locally. When I tried to submit a test >>> > link in it, I encountered an error. >>> > >>> > http://dpaste.com/hold/1007760/ contains details of it. >>> > >>> > So, can you please look into it? I don't think you guys have got this >>> > one! It would be great if you can help me on it.. >>> >>> Apparently, scipycentral uses Sphinx to compile the rst markup to HTML. >>> It seems like you didn't set up the Sphinx suite, at least, sphinx is >>> complaining about a missing config. >>> >>> A. >>> >> I do have Sphinx. I guess its unable to find ./deploy/compile/conf.py > right? All I think is that there must be a bug with os.path.. >> Note: I am on windows > > Maybe it will help to see a directory listing on the deployed webserver? > http://pastebin.com/ph3ReddB > > I seem to remember using "os.sep" everywhere, instead of hard coding the > directory separator, but I might have missed one or two instances. > > Kevin Is the problem solved? Source code looks fine to me; the deploy/ directory should get created by the ensure_dir call. @Kevin: Any specific reason why you used string addition and os.sep instead of os.path.join to create paths? The manual adding seems more error-prone to me. But maybe this is not the problem after all ... as I said I cannot test on Windows. A. 
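Andreas's preference for os.path.join over manual os.sep concatenation is easy to demonstrate; a short sketch (the path parts are made up, not taken from the scipycentral source):

```python
import os

# Hand-rolled concatenation: every call site must remember the separator,
# and a part that already ends with os.sep silently doubles it.
manual = "deploy" + os.sep + "compile" + os.sep + "conf.py"

# os.path.join inserts separators itself, and correctly restarts from an
# absolute component, which manual concatenation cannot do.
joined = os.path.join("deploy", "compile", "conf.py")

print(manual == joined)  # True
```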
From warren.weckesser at gmail.com Sat Mar 2 07:39:59 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 2 Mar 2013 07:39:59 -0500 Subject: [SciPy-Dev] Request for feedback on an enhancement to scipy.stats.rankdata Message-ID: Pull request 447 (https://github.com/scipy/scipy/pull/447) enhances scipy.stats.rankdata with a choice of the ranking method. The description on github includes an example. Any feedback would be appreciated! Warren From suryak at ieee.org Sat Mar 2 07:40:56 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 2 Mar 2013 18:10:56 +0530 Subject: [SciPy-Dev] running into errors in scipycentral In-Reply-To: <5131E8BF.3010700@hilboll.de> References: <512F62F9.7030705@hilboll.de> <5131E8BF.3010700@hilboll.de> Message-ID: On Sat, Mar 2, 2013 at 5:25 PM, Andreas Hilboll wrote: > >>> > I just configured scipycentral locally. When I tried to submit a test > >>> > link in it, I encountered an error. > >>> > > >>> > http://dpaste.com/hold/1007760/ contains details of it. > >>> > > >>> > So, can you please look into it? I don't think you guys have got this > >>> > one! It would be great if you can help me on it.. 
> > @Kevin: Any specific reason why you used string addition and os.sep > instead of os.path.join to create paths? The manual adding seems more > error-prone to me. But maybe this is not the problem after all ... as I > said I cannot test on Windows. > I spared some time to fix it this weekend (today). I really don't know whether its caused due to os.sep or os.path or os.dir ... etc things -- It was a blind guess.. > > A. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Sat Mar 2 09:06:38 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 2 Mar 2013 19:36:38 +0530 Subject: [SciPy-Dev] running into errors in scipycentral In-Reply-To: References: <512F62F9.7030705@hilboll.de> <5131E8BF.3010700@hilboll.de> Message-ID: On Sat, Mar 2, 2013 at 6:10 PM, Surya Kasturi wrote: > > > > On Sat, Mar 2, 2013 at 5:25 PM, Andreas Hilboll wrote: > >> >>> > I just configured scipycentral locally. When I tried to submit a >> test >> >>> > link in it, I encountered an error. >> >>> > >> >>> > http://dpaste.com/hold/1007760/ contains details of it. >> >>> > >> >>> > So, can you please look into it? I don't think you guys have got >> this >> >>> > one! It would be great if you can help me on it.. >> >>> >> >>> Apparently, scipycentral uses Sphinx to compile the rst markup to >> HTML. >> >>> It seems like you didn't set up the Sphinx suite, at least, sphinx is >> >>> complaining about a missing config. >> >>> >> >>> A. >> >>> >> >> I do have Sphinx. I guess its unable to find ./deploy/compile/conf.py >> > right? All I think is that there must be a bug with os.path.. >> >> Note: I am on windows >> > >> > Maybe it will help to see a directory listing on the deployed webserver? 
>> > http://pastebin.com/ph3ReddB >> > >> > I seem to remember using "os.sep" everywhere, instead of hard coding the >> > directory separator, but I might have missed one or two instances. >> > >> > Kevin >> >> Is the problem solved? Source code looks fine to me; the deploy/ >> directory should get created by the ensure_dir call. >> >> @Kevin: Any specific reason why you used string addition and os.sep >> instead of os.path.join to create paths? The manual adding seems more >> error-prone to me. But maybe this is not the problem after all ... as I >> said I cannot test on Windows. >> > I spared some time to fix it this weekend (today). > I really don't know whether its caused due to os.sep or os.path or os.dir > ... etc things -- It was a blind guess.. > >> >> A. >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > Hey, I just had a look at the source code where it's causing the error. Here is the piece of info I found: In the call_sphinx_to_compile(working_dir) function (rest_comments/views.py file), I guess the working_dir variable is going wrong! I don't know, but when I overwrote it with the absolute dir location of "deploy/compile" on my system, the code passed the function successfully. However, it again went wrong in the subsequent stages where working_dir is involved (line 236): obj = pickle.load(fhand) I guess this is where something is malfunctioning. Actually, as you guys are more familiar with the code than me (still getting along with it slowly), I thought sharing this info with you might help fix the bug easily -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From suryak at ieee.org Sat Mar 2 10:21:48 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 2 Mar 2013 20:51:48 +0530 Subject: [SciPy-Dev] running into errors in scipycentral In-Reply-To: References: <512F62F9.7030705@hilboll.de> <5131E8BF.3010700@hilboll.de> Message-ID: On Sat, Mar 2, 2013 at 7:36 PM, Surya Kasturi wrote: > > > > On Sat, Mar 2, 2013 at 6:10 PM, Surya Kasturi wrote: > >> >> >> >> On Sat, Mar 2, 2013 at 5:25 PM, Andreas Hilboll wrote: >> >>> >>> > I just configured scipycentral locally. When I tried to submit a >>> test >>> >>> > link in it, I encountered an error. >>> >>> > >>> >>> > http://dpaste.com/hold/1007760/ contains details of it. >>> >>> > >>> >>> > So, can you please look into it? I don't think you guys have got >>> this >>> >>> > one! It would be great if you can help me on it.. >>> >>> >>> >>> Apparently, scipycentral uses Sphinx to compile the rst markup to >>> HTML. >>> >>> It seems like you didn't set up the Sphinx suite, at least, sphinx is >>> >>> complaining about a missing config. >>> >>> >>> >>> A. >>> >>> >>> >> I do have Sphinx. I guess its unable to find ./deploy/compile/conf.py >>> > right? All I think is that there must be a bug with os.path.. >>> >> Note: I am on windows >>> > >>> > Maybe it will help to see a directory listing on the deployed >>> webserver? >>> > http://pastebin.com/ph3ReddB >>> > >>> > I seem to remember using "os.sep" everywhere, instead of hard coding >>> the >>> > directory separator, but I might have missed one or two instances. >>> > >>> > Kevin >>> >>> Is the problem solved? Source code looks fine to me; the deploy/ >>> directory should get created by the ensure_dir call. >>> >>> @Kevin: Any specific reason why you used string addition and os.sep >>> instead of os.path.join to create paths? The manual adding seems more >>> error-prone to me. But maybe this is not the problem after all ... as I >>> said I cannot test on Windows. >>> >> I spared some time to fix it this weekend (today). 
>> I really don't know whether its caused due to os.sep or os.path or os.dir >> ... etc things -- It was a blind guess.. >> >>> >>> A. >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> > Hey, I just had a look at the source code, where its causing error.. > Here is the piece of info I found: > > In call_sphinx_to_compile(working_dir) function, (rest_comments/views.py > file) > > I guess working_dir variable is going wrong! I don't know but, when I > overwritten its variable with absolute dir location of "deploy/compile" in > my system, the code passed the function successfully, > > However, it again went wrong in the subsequent stages where working_dir is > involved.. (line 236) > > obj = pickle.load(fhand) > > > I guess this is where something is malfunctioning.. > > Actually, as you guys are quite familiar with the code (than me . still > getting along with it slowly).. I thought sharing this info with you might > help fix the bug easily > > *I think I fixed it :)* However, there are some concerns. Please go through them and let me know! Here they are: 1. The settings.SPC['comment_compile_dir'] value is "compile". However, this isn't working... But when I tried with an absolute location (ex: "d:/code/scipycentral/deploy/compile") it worked!! 2. There is no "email_website_admin.txt" in the "submission" templates. So, it raised TemplateDoesNotExist when I clicked "finish submission". I just created an empty file with that name and it worked.. (Don't know what it's for -- didn't go through this stuff.) Anyway, what is it for? Why is it not present? 3. There is an EOFError with pickle.load() inside ./rest_comments/views.py Line 236, 235. 
Here in Line 235, it's written as with open(pickle_f, 'r') as fhand: However, with reference to this discussion: http://stackoverflow.com/q/700873/1162468 I tried with open(pickle_f, 'rb') as fhand: ------ After all this tweaking, things are going smoothly. Now, it would be great if you could try these on your local machines and let me know.. Thanks Surya -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicole.haenni at gmail.com Sat Mar 2 21:42:09 2013 From: nicole.haenni at gmail.com (Nicole Haenni) Date: Sun, 3 Mar 2013 03:42:09 +0100 Subject: [SciPy-Dev] Survey for framework and library developers: "Information needs in software ecosystems" In-Reply-To: References: Message-ID: Hi, I'm Nicole Haenni and I'm doing research for my thesis at the University of Berne (scg.unibe.ch) with Mircea Lungu and Niko Schwarz. We are doing research on monitoring the activity in software ecosystems. This is a study about information needs that arise in such software ecosystems. I need your help to fill out the survey below. It takes about 10 minutes to complete it. A software ecosystem can be a project repository like GitHub, an open source community (e.g. the Apache community) or a language-based community (e.g. Smalltalk has Squeaksource, Ruby has Rubyforge). We formulate our research question as follows: "What information needs arise when developers use code from other projects, or see their own code used elsewhere." Survey link: http://bit.ly/14Zc71N or original link: https://docs.google.com/spreadsheet/viewform?formkey=dFBJUmVodVU1V3BMMGRPRWxBdjdDbVE6MA Thank you for your support! Nicole Haenni ----------------------------------------- Software Composition Group Institut für Informatik Universität Bern Neubrückstrasse 10 CH-3012 Bern SWITZERLAND -------------- next part -------------- An HTML attachment was scrubbed... 
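Returning to the pickle fix in Surya's message above: pickle data is a byte stream, so the file must be opened in binary mode. On Windows, text mode translates line endings and can truncate reads, which surfaces as exactly the EOFError reported. A minimal sketch (the file name and payload are made up):

```python
import os
import pickle
import tempfile

obj = {"title": "demo", "count": 3}
path = os.path.join(tempfile.mkdtemp(), "data.pkl")

# Write and read pickles in *binary* mode ('wb'/'rb'); text mode applies
# newline translation on Windows and corrupts the byte stream.
with open(path, "wb") as fhand:
    pickle.dump(obj, fhand)

with open(path, "rb") as fhand:
    restored = pickle.load(fhand)

print(restored == obj)  # True
```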
URL: From tsyu80 at gmail.com Sun Mar 3 19:13:31 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 3 Mar 2013 18:13:31 -0600 Subject: [SciPy-Dev] Scipy Central logo In-Reply-To: References: <51093EC5.6030809@hilboll.de> <510A9565.4010606@crans.org> Message-ID: On Thu, Jan 31, 2013 at 11:42 AM, Pauli Virtanen wrote: > 31.01.2013 18:01, Pierre Haessig kirjoitti: > [clip] > > About feedback on the logo (I didn't find an existing thread on the ML) > > I suspect there may be an issue of perceived geographical bias by > > centering the map projection on North Atlantic Ocean. > > > > I'm by no means an specialist of this topic, but I know it may exist. > > (see http://en.wikipedia.org/wiki/Reversed_map for example). As possible > > workarounds, I see either : > > * no continents display (don't know how good or how boring such an earth > > would look) > > * or bit funnier: random map orientation ! (each page refresh could > > point to another random map from a prerendered pool) > > I think a reasonable workaround here would be to have two or so versions > of the logo, and just choose between them randomly. > > -- > Pauli Virtanen > I've updated the logo code to use multiple globe orientations. The original had arrows pointing from specific cities into a central location, which made a lot of sense for "Scipy Central". To allow multiple globe orientations, however, I changed the arrows so that they start from the edges of the globe; that way the start positions wouldn't be at random locations when the map was reoriented. It doesn't mesh quite as well with what I had in mind, but at least the arrows don't point into the North Atlantic Garbage Patch anymore. Also, I tweaked the element sizes so that it looked better at smaller scales (which was mentioned in a separate thread). The example focuses the arrows on a list of cities, but it could be made completely random if desired. 
I've copied a couple of orientations below (there are more cities in the code, and more can be easily added). Cheers, -Tony [image: Inline image 1][image: Inline image 2] The one below may be closer to logo size. [image: Inline image 3] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35438 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 36156 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11015 bytes Desc: not available URL: From robert.kern at gmail.com Mon Mar 4 11:23:22 2013 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 4 Mar 2013 16:23:22 +0000 Subject: [SciPy-Dev] Scipy Central logo In-Reply-To: References: <51093EC5.6030809@hilboll.de> <510A9565.4010606@crans.org> Message-ID: On Mon, Mar 4, 2013 at 12:13 AM, Tony Yu wrote: > > I've updated the logo code to use multiple globe orientations. The original had arrows pointing from > specific cities into a central location, which made a lot of sense for "Scipy > Central". To allow multiple globe orientations, however, I changed the > arrows so that they start from the edges of the globe; that way the start > positions wouldn't be at random locations when the map was reoriented. It > doesn't mesh quite as well with what I had in mind, but at least the arrows > don't point into the North Atlantic Garbage Patch anymore. > > Also, I tweaked the element sizes so that it looked better at smaller > scales (which was mentioned in a separate thread). > > The example focuses the arrows on a list of cities, but it could be made > completely random if desired. I've copied a couple of orientations below > (there are more cities in the code, and more can be easily added).
> 'rlyeh': {'lat': -47.15, 'lon': -126.72} -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Mar 4 16:04:09 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 4 Mar 2013 22:04:09 +0100 Subject: [SciPy-Dev] contribute to scipy.signal In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 7:34 PM, survinder pal wrote: > hello... I am working in the area of signal processing. I want to contribute > to scipy.signal in the signal processing and wavelets domain. Suggestions are > welcome... > Hi Survinder, welcome. The first messages from Pauli and me in this thread may give you some ideas to start with: http://thread.gmane.org/gmane.comp.python.scientific.devel/17173. As for wavelets, there's not a lot in Scipy. What's there (see under Wavelets in http://docs.scipy.org/doc/scipy/reference/signal.html) could use a critical review and better documentation. PyWavelets has much more functionality, but development seems to have stalled. No idea why or how; you could contact the author if that project looks interesting. Cheers, Ralf > On Fri, Mar 1, 2013 at 11:30 PM, wrote: >> Send SciPy-Dev mailing list submissions to >> scipy-dev at scipy.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> or, via email, send a message with subject or body 'help' to >> scipy-dev-request at scipy.org >> >> You can reach the person managing the list at >> scipy-dev-owner at scipy.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of SciPy-Dev digest..." >> >> >> Today's Topics: >> >> 1. Re: Like to contribute to Scipy (Ralf Gommers) >> 2.
Re: Multilinear interpolation (Pablo Winant) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Thu, 28 Feb 2013 21:27:37 +0100 >> From: Ralf Gommers >> Subject: Re: [SciPy-Dev] Like to contribute to Scipy >> To: SciPy Developers List >> Message-ID: >> > vHJ9z1xDQN6XvJUqb8UxA at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> On Thu, Feb 28, 2013 at 7:39 AM, anand parthasarathy > >wrote: >> >> > Hi ralf, >> > >> > Thank you for your reply. I am thinking of making the Scipy.signal >> module >> > to have the functions for communication, like communication toolbox in >> > matlab. We could join and can effectively launch packages including >> > features like Error correction mechanisms, channel functions etc. will >> this >> > be useful ? >> > >> >> Hi Anand, to be honest I think those features are too specialized for >> Scipy. I don't know of any Python packages with that type of functionality >> (but I haven't searched and I'm not an expert). Maybe someone else has a >> good suggestion. You could consider starting a new scikit if you plan on >> implementing enough functionality to justify that. >> >> Cheers, >> Ralf >> >> >> >> > >> > Else can you point me towards any project involving productive results. >> > >> > >> > >> > >> > >> > >> > Regards, >> > Anand parthasarathy >> > >> > On Mon, Feb 25, 2013 at 2:11 AM, Ralf Gommers > >wrote: >> > >> >> >> >> >> >> >> >> On Tue, Feb 19, 2013 at 8:44 AM, anand parthasarathy < >> anandps20 at gmail.com >> >> > wrote: >> >> >> >>> Hi, >> >>> >> >>> I would like to start contribute to Scipy - signal processing module. >> I >> >>> have been using MATLAB for a very long time, to the extend that i have >> >>> implemented many signal processing algorithms targeting the >> applications in >> >>> the field of Communications & wireless PHY Layer . 
>> >>> >> >>> I am currently a Research Intern at Centre of Excellence in Wireless >> >>> Technology (CEWiT), India. I am so much fascinated by Scipy and >> though i am >> >>> new to python, i started love to code in python environment. >> >>> >> >>> Since this field is like a ocean , i would like to get suggestions >> from >> >>> you guys on this like where to start , and after decided with a >> specific >> >>> part to work on , i will start and contribute many things to Scipy to >> the >> >>> extend that i can. >> >>> >> >> >> >> Hi Anand, welcome! The signal module certainly could use your >> >> contributions. >> >> >> >> The same question you're asking was asked about a month ago by someone >> >> else, please have a look at this thread: >> >> http://thread.gmane.org/gmane.comp.python.scientific.devel/17173 . It >> >> contains a few ideas and pointers for getting started. If you have >> ideas >> >> for new algorithms/functionality to contribute, I recommend that you >> first >> >> discuss them on this mailing list to make sure the code you write will >> be a >> >> good fit for scipy. >> >> >> >> If you have more specific questions later, please don't hesitate to >> ask. >> >> >> >> Cheers, >> >> Ralf >> >> >> >> >> >> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> Regards, >> >>> Anand Parthasarathy >> >>> >> >>> >> >>> >> >>> _______________________________________________ >> >>> SciPy-Dev mailing list >> >>> SciPy-Dev at scipy.org >> >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >>> >> >>> >> >> >> >> _______________________________________________ >> >> SciPy-Dev mailing list >> >> SciPy-Dev at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> >> >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> > >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: >> http://mail.scipy.org/pipermail/scipy-dev/attachments/20130228/78b3f69b/attachment-0001.html >> >> ------------------------------ >> >> Message: 2 >> Date: Fri, 01 Mar 2013 11:58:27 +0100 >> From: Pablo Winant >> Subject: Re: [SciPy-Dev] Multilinear interpolation >> To: scipy-dev at scipy.org >> Message-ID: <513089D3.7010607 at gmail.com> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >> >> So I have updated the tempita template using fused types. I also >> included code generation helper functions in python blocks. >> >> I found a problem with the template generation engine >> ( >> https://groups.google.com/forum/?fromgroups=#!topic/paste-users/bwrf6xPhIjo >> ). >> Basically, functions defined in python blocks cannot depend upon each >> other. In particular, they cannot be recursive. I avoided the problem by >> defining a dummy function but this is of course not very elegant. >> >> On the pyx side: >> - I included a "with nogil, parallel():" instruction but I am not sure >> if it has enabled any parallel calculation at all. Cpu usage seems to >> rely on one core only. >> - By looking at the generated C file, I suspect there is still a small >> cython overhead. I see conditional branches inside the loop while there >> should be none. >> - Maybe I should not be using memoryviews and use arrays of pointers >> instead inside the loop ? Then would it be enough to pass memoryviews as >> arguments in order to ensure data continuity ? >> - Performances are not so bad if I compare to matlab. I tried to >> evaluate a function defined over a [4,3,5,6] 4d-grid on 10000000 random >> points and I got 0.2s vs 1.4s for matlab's interpn(...,'linear*'). Of >> course this is to be taken with a grain of salt and I think it could >> still be improved a bit. >> >> >> >> On 25/02/2013 21:19, Pauli Virtanen wrote: >> > 25.02.2013 22:04, Pablo Winant kirjoitti: >> > [clip] >> >>> - The code assumes C-contiguous arrays, but does not check for it. 
>> >> right >> > Note that you can make Cython check for this, the syntax is >> > sometype[::1] or sometype[:,::1] for multidim (see below). >> > >> > [clip] >> >>> You can use fused types instead of templating; the end result is the >> same, >> >>> but Cython takes care of picking the correct routine. >> >>> >> >> I tried it but I didn't see how to do it. I would like to define a >> >> function like
>> >>
>> >> def function(ndarray[fused_type, ndim=2]):
>> >>     ...
>> >>     cdef fused_type internal_variable
>> >>     ...
>> >>     return something
>> >
>> > The syntax for these arrays is
>> >
>> > cdef fused number_t:
>> >     double
>> >     double complex
>> >
>> > def my_function(fused_type[:,:] arg):
>> >     ...
>> >
>> > I think it doesn't work with the (old) np.ndarray syntax. >> > >> >> >> >> ------------------------------ >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> End of SciPy-Dev Digest, Vol 113, Issue 1 >> ***************************************** >> > > > > -- > regards > > Survinder Pal > M.Tech(AIE) > CSIR-CSIO Chandigarh > survinderpal at csio.res.in > 9569868960 > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smith.daniel.br at gmail.com Tue Mar 5 08:57:01 2013 From: smith.daniel.br at gmail.com (Daniel Smith) Date: Tue, 5 Mar 2013 08:57:01 -0500 Subject: [SciPy-Dev] Sparse Matrix Prototype Message-ID: Hi, I've been working on adding fancy indexing to the LIL sparse matrix class and have run into a problem. Judging from the tests, no decision has been made as to what NumPy structure the class should replicate. In particular, some of the tests assume ndarray behavior, and others assume matrix behavior.
The biggest difference is in row/column vector behavior. Take for example:

A = np.zeros((5, 5))
B = np.matrix(A)

A[:, 1]
array([0, 0, 0, 0, 0])
B[:, 1]
matrix([ [0], [0], [0], [0], [0] ])

Since NumPy is encouraging people to use ndarrays over matrices, it might make sense to reproduce the ndarray behavior. However, since the class has other matrix-specific behavior, e.g. being only two-dimensional, it might be confusing to have the class behave like an ndarray in other ways. Until a decision is made, no version of the class will pass all the tests as currently written. Any input would be greatly appreciated. Thanks, Daniel From njs at pobox.com Tue Mar 5 09:18:51 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 5 Mar 2013 14:18:51 +0000 Subject: [SciPy-Dev] Sparse Matrix Prototype In-Reply-To: References: Message-ID: On Tue, Mar 5, 2013 at 1:57 PM, Daniel Smith wrote: > Hi, > > I've been working on adding fancy indexing to the LIL sparse matrix > class and have run into a problem. Judging from the tests, no decision > has been made as to what NumPy structure the class should replicate. > In particular, some of the tests assume ndarray behavior, and others > assume matrix behavior. The biggest difference is in row/column vector > behavior. Take for example: > > A = np.zeros((5, 5)) > B = np.matrix(A) > > A[:, 1] > array([0, 0, 0, 0, 0]) > B[:, 1] > matrix([ [0], [0], [0], [0], [0] ]) > > Since NumPy is encouraging people to use ndarrays over matrices, it > might make sense to reproduce the ndarray behavior. However, since the > class has other matrix-specific behavior, e.g. being only > two-dimensional, it might be confusing to have the class behave like > an ndarray in other ways. Until a decision is made, no version of the > class will pass all the tests as currently written. Any input would be > greatly appreciated.
I find the "matrixness" of sparse matrices to be constantly annoying and a source of tons of special cases in code that wants to handle both sparse and dense matrices... but it is what it is. If they can't act like ndarrays, better they act like np.matrix's than like some weird mash-up of the two. And in particular the key thing for this indexing example is that we *can't* return a sparse 1-d ndarray-alike, right, because we have no structure to represent sparse 1-d things? -n From smith.daniel.br at gmail.com Tue Mar 5 09:47:34 2013 From: smith.daniel.br at gmail.com (Daniel Smith) Date: Tue, 5 Mar 2013 09:47:34 -0500 Subject: [SciPy-Dev] Sparse Matrix Prototype Message-ID: > On Tue, Mar 5, 2013 at 1:57 PM, Daniel Smith wrote: >> Hi, >> >> I've been working on adding fancy indexing to the LIL sparse matrix >> class and have run into a problem. Judging from the tests, no decision >> has been made as to what NumPy structure the class should replicate. >> In particular, some of the tests assume ndarray behavior, and others >> assume matrix behavior. The biggest difference is in row/column vector >> behavior. Take for example: >> >> A = np.zeros((5, 5)) >> B = np.matrix(A) >> >> A[:, 1] >> array([0, 0, 0, 0, 0]) >> B[:, 1] >> matrix([ [0], [0], [0], [0], [0] ]) >> >> Since NumPy is encouraging people to use ndarrays over matrices, it >> might make sense to reproduce the ndarray behavior. However, since the >> class has other matrix-specific behavior, e.g. being only >> two-dimensional, it might be confusing to have the class behave like >> an ndarray in other ways. Until a decision is made, no version of the >> class will pass all the tests as currently written. Any input would be >> greatly appreciated. > > I find the "matrixness" of sparse matrices to be constantly annoying > and a source of tons of special cases in code that wants to handle > both sparse and dense matrices... but it is what it is. 
If they can't > act like ndarrays, better they act like np.matrix's than like some > weird mash-up of the two. And in particular the key thing for this > indexing example is that we *can't* return a sparse 1-d ndarray-alike, > right, because we have no structure to represent sparse 1-d things? > > -n We have at least one sparse 1-d ndarray like thing in a sparse column vector, a 1xN matrix. This code would output exactly what you want: B = sparse.lil_matrix(np.reshape(np.arange(25), (5, 5))) vector = np.squeeze(B[:, 0].A) vector np.array([0, 5, 10, 15, 20]) Another option is to leave the built-in __getitem__ and __setitem__ behavior like a matrix and add get_array, set_array which would mirror ndarray behavior. We already have the method .A which returns a dense ndarray to go along with .todense() which returns a dense matrix. Daniel From njs at pobox.com Tue Mar 5 10:12:21 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 5 Mar 2013 15:12:21 +0000 Subject: [SciPy-Dev] Sparse Matrix Prototype In-Reply-To: References: Message-ID: On Tue, Mar 5, 2013 at 2:47 PM, Daniel Smith wrote: >> On Tue, Mar 5, 2013 at 1:57 PM, Daniel Smith wrote: >>> Hi, >>> >>> I've been working on adding fancy indexing to the LIL sparse matrix >>> class and have run into a problem. Judging from the tests, no decision >>> has been made as to what NumPy structure the class should replicate. >>> In particular, some of the tests assume ndarray behavior, and others >>> assume matrix behavior. The biggest difference is in row/column vector >>> behavior. Take for example: >>> >>> A = np.zeros((5, 5)) >>> B = np.matrix(A) >>> >>> A[:, 1] >>> array([0, 0, 0, 0, 0]) >>> B[:, 1] >>> matrix([ [0], [0], [0], [0], [0] ]) >>> >>> Since NumPy is encouraging people to use ndarrays over matrices, it >>> might make sense to reproduce the ndarray behavior. However, since the >>> class has other matrix-specific behavior, e.g. 
being only >>> two-dimensional, it might be confusing to have the class behave like >>> an ndarray in other ways. Until a decision is made, no version of the >>> class will pass all the tests as currently written. Any input would be >>> greatly appreciated. >> >> I find the "matrixness" of sparse matrices to be constantly annoying >> and a source of tons of special cases in code that wants to handle >> both sparse and dense matrices... but it is what it is. If they can't >> act like ndarrays, better they act like np.matrix's than like some >> weird mash-up of the two. And in particular the key thing for this >> indexing example is that we *can't* return a sparse 1-d ndarray-alike, >> right, because we have no structure to represent sparse 1-d things? >> >> -n > > We have at least one sparse 1-d ndarray like thing in a sparse column > vector, a 1xN matrix. I thought the whole point of your example was that for the given sort of indexing, ndarray's return an object with shape (N,) and np.matrix's return an object with shape (N, 1), and the question was which should sparse matrices do? And my point was that there's no such thing as a sparse matrix with shape (N,) so the choice was sort of obvious? Maybe I'm just missing something. -n From smith.daniel.br at gmail.com Tue Mar 5 11:34:31 2013 From: smith.daniel.br at gmail.com (Daniel Smith) Date: Tue, 5 Mar 2013 11:34:31 -0500 Subject: [SciPy-Dev] Sparse Matrix Prototype Message-ID: Nathaniel Smith pobox.com> writes: > > On Tue, Mar 5, 2013 at 2:47 PM, Daniel Smith gmail.com> wrote: > >> On Tue, Mar 5, 2013 at 1:57 PM, Daniel Smith gmail.com> wrote: > >>> Hi, > >>> > >>> I've been working on adding fancy indexing to the LIL sparse matrix > >>> class and have run into a problem. Judging from the tests, no decision > >>> has been made as to what NumPy structure the class should replicate. > >>> In particular, some of the tests assume ndarray behavior, and others > >>> assume matrix behavior. 
The biggest difference is in row/column vector > >>> behavior. Take for example: > >>> > >>> A = np.zeros((5, 5)) > >>> B = np.matrix(A) > >>> > >>> A[:, 1] > >>> array([0, 0, 0, 0, 0]) > >>> B[:, 1] > >>> matrix([ [0], [0], [0], [0], [0] ]) > >>> > >>> Since NumPy is encouraging people to use ndarrays over matrices, it > >>> might make sense to reproduce the ndarray behavior. However, since the > >>> class has other matrix-specific behavior, e.g. being only > >>> two-dimensional, it might be confusing to have the class behave like > >>> an ndarray in other ways. Until a decision is made, no version of the > >>> class will pass all the tests as currently written. Any input would be > >>> greatly appreciated. > >> > >> I find the "matrixness" of sparse matrices to be constantly annoying > >> and a source of tons of special cases in code that wants to handle > >> both sparse and dense matrices... but it is what it is. If they can't > >> act like ndarrays, better they act like np.matrix's than like some > >> weird mash-up of the two. And in particular the key thing for this > >> indexing example is that we *can't* return a sparse 1-d ndarray-alike, > >> right, because we have no structure to represent sparse 1-d things? > >> > >> -n > > > > We have at least one sparse 1-d ndarray like thing in a sparse column > > vector, a 1xN matrix. > > I thought the whole point of your example was that for the given sort > of indexing, ndarray's return an object with shape (N,) and > np.matrix's return an object with shape (N, 1), and the question was > which should sparse matrices do? And my point was that there's no such > thing as a sparse matrix with shape (N,) so the choice was sort of > obvious? Maybe I'm just missing something. 
> > -n > A couple more examples might be instructive:

A = np.zeros((5, 5))
B = np.matrix(A)

B[1].shape
(1, 5)
A[1].shape
(5, )

B[:, 1].shape
(5, 1)
A[:, 1].shape
(5, )

A[:, 1] = A[1, :]
None
B[:, 1] = B[1, :]
ValueError: output operand requires a reduction, but reduction is not enabled

Basically, matrices differentiate between row and column vectors, and ndarrays don't. Daniel From dave.hirschfeld at gmail.com Tue Mar 5 15:13:43 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Tue, 5 Mar 2013 20:13:43 +0000 (UTC) Subject: [SciPy-Dev] git question: fetch a PR without merging into master References: Message-ID: Warren Weckesser gmail.com> writes: > > Hey all, Does anyone have a git recipe for grabbing a pull request from github without merging it into master? Ideally, I'd like to branch from the same point the PR branches from master, and not merge the branch back into master. The use case is to test pull requests that have merge conflicts. I'd like to be able to experiment with the PR, while putting off resolving the merge conflicts until later. Thanks, Warren > You can check out pull requests if you change your config file as described in: https://gist.github.com/piscisaureus/3342247 ...except you'll probably want to change the upstream remote rather than origin. NB: IIRC I came across this very useful info on the IPython list -Dave From lguzzardi at gmail.com Wed Mar 6 06:40:14 2013 From: lguzzardi at gmail.com (luca guzzardi) Date: Wed, 6 Mar 2013 12:40:14 +0100 Subject: [SciPy-Dev] linear algebra Message-ID: Hello, Currently I'm studying some topics in image processing and I am writing some code for image deblurring. This code is based on fast methods for solving special linear systems such as circulant or Toeplitz matrices or more complicated stuff: I would like to contribute all these methods. -------------- next part -------------- An HTML attachment was scrubbed...
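For context on luca's proposal above: a circulant system can be solved in O(n log n), because a circulant matrix acts by circular convolution and is therefore diagonalized by the discrete Fourier transform. An illustrative sketch of the idea (not luca's code; function and variable names are made up):

```python
import numpy as np
from scipy.linalg import circulant

def solve_circulant_fft(c, b):
    """Solve C x = b, where C is the circulant matrix with first column c.

    C x is the circular convolution of c and x, so in Fourier space the
    system is diagonal: x = ifft(fft(b) / fft(c)).  This costs O(n log n)
    versus O(n^3) for a dense solve.
    """
    x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c))
    return x.real if np.isrealobj(c) and np.isrealobj(b) else x

c = np.array([4.0, 1.0, 0.0, 1.0])   # first column; fft(c) has no zeros
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_circulant_fft(c, b)
assert np.allclose(circulant(c) @ x, b)  # agrees with the dense system
```

The same diagonalization idea extends to block-circulant systems via 2-D FFTs, which is what makes it attractive for image deblurring.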
From pav at iki.fi Wed Mar 6 15:39:46 2013 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 06 Mar 2013 22:39:46 +0200 Subject: [SciPy-Dev] linear algebra In-Reply-To: References: Message-ID: Hi, 06.03.2013 13:40, luca guzzardi kirjoitti: > Currently I'm studying some topics in image processing and I writing > some code for image deblurring. This code is based fast methods for > solving special linear systems such as circulant matrix or Toeplitz > matrix or more complicated stuff: I would like to contribute to with > all these methods.
This sounds interesting --- I think there is room in scipy.linalg for algorithms for solving additional linear algebra problems, in addition to the basic LAPACK-provided triangular/banded matrix cases. So I think these routines would be welcome. What is your implementation language --- pure Python or some compiled language (C, Fortran, Cython will all work for us)? -- Pauli Virtanen From lists at informa.tiker.net Wed Mar 6 17:23:51 2013 From: lists at informa.tiker.net (Andreas Kloeckner) Date: Wed, 06 Mar 2013 17:23:51 -0500 Subject: [SciPy-Dev] Interpolative Decomposition (was: Re: linear algebra) References: Message-ID: <876214p3pk.fsf@ding.tiker.net> Hi Pauli, all, Pauli Virtanen writes: > 06.03.2013 13:40, luca guzzardi kirjoitti: >> Currently I'm studying some topics in image processing and I writing >> some code for image deblurring. This code is based fast methods for >> solving special linear systems such as circulant matrix or Toeplitz >> matrix or more complicated stuff: I would like to contribute to with >> all these methods. > > This sounds interesting --- I think there is room in scipy.linalg for > algorithms for solving additional linear algebra problems, in addition > to the basic LAPACK-provided triangular/banded matrix cases. So I think > these routines would be welcome. Sorry for hijacking this thread--but I thought what you said made a good backdrop for the discussion. :) Ken Ho [1] has produced a wrapper [2] for the Interpolative Decomposition code [3] originated by Per-Gunnar Martinsson, Vladimir Rokhlin, Yoel Shkolnisky, and Mark Tygert. This is a matrix decomposition that successfully competes with the SVD in a number of applications. I thought that this might be a nice thing to have in scipy.linalg and got in touch with Ken to see if he might be interested in helping to integrate it. He was, and specifically, he's OK with adapting the license (currently GPL3) to Scipy's needs. 
The underlying ID package is BSD-ish [4] as far as I can tell, so that doesn't seem to pose extra restrictions. [1] http://www.stanford.edu/~klho/ [2] https://github.com/klho/PyMatrixID [3] https://cims.nyu.edu/~tygert/software.html [4] https://cims.nyu.edu/~tygert/id_doc.pdf I'd be curious to hear if there's an interest from your side. If so, we'd work to provide a patch. Let me know (I'm subscribed), Andreas From lguzzardi at gmail.com Thu Mar 7 14:53:10 2013 From: lguzzardi at gmail.com (luca guzzardi) Date: Thu, 7 Mar 2013 20:53:10 +0100 Subject: [SciPy-Dev] linear algebra Message-ID: I've already wrote a method for circulant matrix and for circulant blockblock circulant matrix: I've both version in C++ and in Python. Python version is newer and better commented. I would prefer work with Python even tough C++ it's fine: I've never used Fortran nor Cython. If it is fine I'll try to submit this method prior to work on the others to learn the procedure (honestly I'm going to getting lost!) Hi, 06.03.2013 13:40, luca guzzardi kirjoitti: > Currently I'm studying some topics in image processing and I writing > some code for image deblurring. This code is based fast methods for > solving special linear systems such as circulant matrix or Toeplitz > matrix or more complicated stuff: I would like to contribute to with > all these methods. This sounds interesting --- I think there is room in scipy.linalg for algorithms for solving additional linear algebra problems, in addition to the basic LAPACK-provided triangular/banded matrix cases. So I think these routines would be welcome. What is your implementation language --- pure Python or some compiled language (C, Fortran, Cython will all work for us)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmiller730 at gmail.com Thu Mar 7 21:16:14 2013 From: cmiller730 at gmail.com (Christopher Miller) Date: Thu, 07 Mar 2013 21:16:14 -0500 Subject: [SciPy-Dev] Sparse grid codes. 
Message-ID: <513949EE.9030706@gmail.com> Hello again, I got really busy with work as soon as I proposed putting sparse grid codes into scipy.integrate, so I was a little slow off the blocks. I've started in earnest tonight and hope to have isotropic Clenshaw-Curtis up and running this weekend and Gaussian sparse grids shortly after that. I've set up a fork at git://github.com/cmiller730/scipy.git I'll probably put in a pull request once isotropic CC and Gaussian grids are working. Chris From geier at lostpackets.de Fri Mar 8 20:26:21 2013 From: geier at lostpackets.de (Christian Geier) Date: Sat, 9 Mar 2013 01:26:21 +0000 Subject: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki Algorithm Message-ID: <20130309012621.GA65314@brutus.lostpackets.de> Hello everyone! Would you in general consider including the Lomb-Scargle periodogram by Press & Rybicki [1] in scipy, in addition to the already present one? I find the included algorithm by Townsend rather slow and had recently some "interesting" results returned by it. I've recently translated the original FORTRAN code (which is actually the description of the algorithm [1]) to (pure) Python [2] and would like to know what the legal situation is in this case: can I release this translated code under the BSD license? In this case I would translate the code further to Cython and supply tests and more documentation.
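For reference on the message above: the routine already present in scipy is scipy.signal.lombscargle, which takes sample times, values, and a grid of *angular* frequencies. A minimal usage sketch on made-up, unevenly sampled data (the signal and numbers are illustrative, not from the thread):

```python
import numpy as np
from scipy.signal import lombscargle

# Made-up, unevenly sampled test signal.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))   # irregular sample times
w0 = 2 * np.pi * 1.3                       # true angular frequency
y = np.cos(w0 * t + 0.4)

# Scan a grid of angular frequencies (zero must be excluded).
freqs = np.linspace(0.5, 20.0, 2000)
pgram = lombscargle(t, y, freqs)

w_peak = freqs[np.argmax(pgram)]           # should land close to w0
```

The periodogram peak locates the dominant frequency even though the samples are irregular, which is the whole point of Lomb-Scargle over a plain FFT.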
Greetings Christian Geier [1] http://adsabs.harvard.edu/full/1989ApJ...338..277P [2] https://github.com/geier/scipy/commit/710bf4ca514d223df39891eb20401aba2edd8cdb From jakevdp at cs.washington.edu Fri Mar 8 20:18:56 2013 From: jakevdp at cs.washington.edu (Jacob Vanderplas) Date: Fri, 8 Mar 2013 17:18:56 -0800 Subject: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki Algorithm In-Reply-To: <20130309012621.GA65314@brutus.lostpackets.de> References: <20130309012621.GA65314@brutus.lostpackets.de> Message-ID: Hi, We have a cython version of this Lomb-Scargle algorithm in astroML [1], as well as a generalized version that does not depend on the sample mean being a good approximation of the true mean. We've not yet implemented the FFT trick shown in Press & Rybicki, but it's fairly fast as-is for problems of reasonable size. For some examples of it in use on astronomical data, see [2-3] below (the examples are figures from our upcoming astro/statistics textbook). This code is BSD-licensed, so if it seems generally useful enough to include in scipy, it would be no problem to port it. Also, if you have implemented an FFT-based version, it would get a fair bit of use in the astronomy community if you were willing to contribute it to astroML. Thanks, Jake [1] https://github.com/astroML/astroML/blob/master/astroML_addons/periodogram.pyx [2] http://astroml.github.com/book_figures/chapter10/fig_LS_example.html [3] http://astroml.github.com/book_figures/chapter10/fig_LS_sg_comparison.html On Fri, Mar 8, 2013 at 5:26 PM, Christian Geier wrote: > Hello everyone! > > Would you in general be considering to include the Lombscargle > Periodogram by Press & Rybicki [1] into scipy in addition to the already > present one? I find the included algorithm by Townend rather slow and > had recently some "interesting" results returned by it. 
> > I've recently translated the original FORTRAN code (which is actually > the description of the algorithm [1]) to (pure) python [2] and would like > to know what the legal situation is in this case: can I release this > translated code under the BSD license? > > In this case I would translate the code further to cython and supply > tests and more documentation. > > Greetings > > Christian Geier > > [1] http://adsabs.harvard.edu/full/1989ApJ...338..277P > [2] > https://github.com/geier/scipy/commit/710bf4ca514d223df39891eb20401aba2edd8cdb > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Mon Mar 11 06:23:57 2013 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 11 Mar 2013 12:23:57 +0200 Subject: [SciPy-Dev] Fwd: Re: Interpolative Decomposition In-Reply-To: <513DB056.9050407@gmail.com> References: <513DB056.9050407@gmail.com> Message-ID: <513DB0BD.6010707@iki.fi> [oops, forgot to include the list in recipients] Hi, 07.03.2013 00:23, Andreas Kloeckner kirjoitti: [clip] > Ken Ho [1] has produced a wrapper [2] for the Interpolative > Decomposition code [3] originated by Per-Gunnar Martinsson, Vladimir > Rokhlin, Yoel Shkolnisky, and Mark Tygert. This is a matrix > decomposition that successfully competes with the SVD in a number of > applications. I thought that this might be a nice thing to have in > scipy.linalg and got in touch with Ken to see if he might be > interested in helping to integrate it. He was, and specifically, he's > OK with adapting the license (currently GPL3) to Scipy's needs. The > underlying ID package is BSD-ish [4] as far as I can tell, so that > doesn't seem to pose extra restrictions. 
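[For later readers of the archive: the PyMatrixID wrapper discussed here eventually landed as scipy.linalg.interpolative in SciPy 0.13. A minimal sketch of what the decomposition does, on a synthetic exactly low-rank matrix (the function names below are the ones that shipped, not the originals under discussion):]

```python
import numpy as np
import scipy.linalg.interpolative as sli

# Build an exactly rank-5 50x50 matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))

# ID to relative precision 1e-8: rank k, skeleton column indices,
# and the interpolation coefficients
k, idx, proj = sli.interp_decomp(A, 1e-8)

# Reconstruct A from its k skeleton columns
B = sli.reconstruct_skel_matrix(A, k, idx)
A_approx = sli.reconstruct_matrix_from_id(B, idx, proj)

print(k, np.linalg.norm(A - A_approx))  # rank ~5, tiny residual
```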
The PyMatrixID package looks pretty complete, and if you think that this decomposition is useful to people (I hadn't heard about it before), I don't think there's a reason why it couldn't just be dropped into scipy.linalg. Only some minor style changes (e.g. docstrings of exposed functions in Numpy/Scipy format) would be needed as far as I see. A PR/patch would be appreciated. One way to go would be to prefix the PyMatrixID package with `_` to mark it private to Scipy, and then import the useful set of functions into the main scipy.linalg namespace. In this approach some functions would need to be renamed, though --- `id` unfortunately shadows a Python built-in and shouldn't be overridden. The recon* functions should be prefixed with id_*, and I personally would prefer spelling out "estimate_rank", "id_reconstruct_skeleton" over contractions. Another way would be to add a `scipy.linalg.id_decomp` module where the useful functions are. I'm not sure which one is less hassle to use. Best regards, Pauli From filip.dominec at gmail.com Tue Mar 12 04:41:10 2013 From: filip.dominec at gmail.com (Filip Dominec) Date: Tue, 12 Mar 2013 09:41:10 +0100 Subject: [SciPy-Dev] Feature proposal: automatic branch selection for arcsin(), arccos() Message-ID: Dear scipy developers, it is well known that arccos(cos(x)) or similar expressions do not always return the value of the original argument, x. The sign as well as any 2*pi*n offset are lost by this operation; in other words, the solution of arccos() has many branches. For certain applications, however, it is necessary to reconstruct the original data, x, without any discontinuities. I have implemented such a proof-of-concept function, arccos_continuous(), which automatically switches the branch of the solution so that arccos_continuous(cos(x))==x, provided the initial offset and sign are supplied.
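[One way such branch tracking can be implemented — an editorial sketch, not Filip's actual code: seed the reconstruction with two known starting values (these encode the initial offset and sign), then pick, for each sample, the arccos branch closest to a linear extrapolation of the reconstruction so far. It assumes dense, essentially noise-free sampling:]

```python
import numpy as np

def arccos_continuous(y, x0, x1):
    """Reconstruct a continuous x from y = cos(x), given the first
    two true values of x (they fix the initial offset and sign)."""
    th = np.arccos(np.clip(y, -1.0, 1.0))   # principal values in [0, pi]
    x = np.empty_like(th)
    x[0], x[1] = x0, x1
    for n in range(2, len(th)):
        pred = 2.0 * x[n - 1] - x[n - 2]    # linear extrapolation
        k = np.round(pred / (2.0 * np.pi))
        base = 2.0 * np.pi * (k + np.array([-1.0, 0.0, 1.0]))
        # all branches 2*pi*k +/- th[n] near the prediction
        cands = np.concatenate([base + th[n], base - th[n]])
        x[n] = cands[np.argmin(np.abs(cands - pred))]
    return x

t = np.linspace(0.0, 20.0, 4000)
x_true = 3.0 * np.sin(t) + 0.5 * t          # non-monotonic test signal
x_rec = arccos_continuous(np.cos(x_true), x_true[0], x_true[1])
print(np.max(np.abs(x_rec - x_true)))       # small reconstruction error
```

(The extrapolation, rather than plain nearest-to-previous selection, is what keeps the right branch when x crosses a fold of the sawtooth; it still cannot disambiguate an extremum of x sitting exactly on a multiple of pi, which no method using cos(x) alone can.)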
Moreover, after having spent the next few evenings experimenting, I also * reimplemented the algorithm so that it does not explicitly process the arrays point-by-point, but only uses numpy/scipy functions that operate on the whole array efficiently; * enabled output and input of the internal branch-selection variables (so the x array may be split into several blocks, and arccos_continuous() still does not forget the branch between consecutive calls). Do you think such functions could be added to scipy, probably in some sub-module? A branch-aware version of arccos() is crucial, e.g., for elegant computation of metamaterial effective parameters. I am looking forward to reading your comments, Filip From gustavo.goretkin at gmail.com Tue Mar 12 13:59:53 2013 From: gustavo.goretkin at gmail.com (Gustavo Goretkin) Date: Tue, 12 Mar 2013 13:59:53 -0400 Subject: [SciPy-Dev] Feature proposal: automatic branch selection for arcsin(), arccos() In-Reply-To: References: Message-ID: If the function relies on stored state, won't there be issues if you call the function from different places in your code "simultaneously"? On Mar 12, 2013 4:41 AM, "Filip Dominec" wrote: > Dear scipy developers, > it is well known that arccos(cos(x)) or similar expressions do not > always return the value of the original argument, x. In this case, the > sign as well as any 2*pi*n offset are lost by this operation; in other > words, the solution of arccos() has many branches. > > For certain applications, however, it is necessary to reconstruct the > original data, x, without any discontinuities. I have implemented such > a proof-of-concept function for arccos_continuous(), which > automatically switches the branch of solution so that the result of > arccos_continuous(cos(x))==x, if the initial offset and sign are > supplied.
> > Moreover, after having spent next few evenings > experimenting, I also > * reimplemented the algorithm so that it does not explicitly process > the arrays point-by-point, but it only uses the numpy/scipy functions > that operate on the whole array efficiently; > * enabled output and input of the internal branch selection variables > (so the x array may be split into several blocks, and still the > arccos_continuous() does not forget the branch between consecutive > calls). > > Do you think such functions could be added to scipy, probably in some > sub-module? Branch-aware version of arccos() is crucial, e. g., for > elegant computation of metamaterial effective parameters. > > I am looking forward to read your comments, > Filip > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Tue Mar 12 14:09:58 2013 From: suryak at ieee.org (Surya Kasturi) Date: Tue, 12 Mar 2013 23:39:58 +0530 Subject: [SciPy-Dev] KeyError on ScipyCentral uploading image -- site running locally. Message-ID: Hey, I found another error on ScipyCentral (running locally): whenever I upload an image, I get the error below. http://dpaste.com/hold/1020806/ (KeyError) at \scipy_central\submission\views.py in new_or_edit_submission, line 562 Has anyone faced this? Can anyone help me with it? -- I have finished all the design work on the code except this bug (found yesterday), and was about to commit the new design :( -------------- next part -------------- An HTML attachment was scrubbed...
URL: From njs at pobox.com Tue Mar 12 14:56:23 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 12 Mar 2013 18:56:23 +0000 Subject: [SciPy-Dev] Feature proposal: automatic branch selection for arcsin(), arccos() In-Reply-To: References: Message-ID: On Tue, Mar 12, 2013 at 8:41 AM, Filip Dominec wrote: > > Dear scipy developers, > it is well known that arccos(cos(x)) or similar expressions do not > always return the value of the original argument, x. In this case, the > sign as well as any 2*pi*n offset are lost by this operation; in other > words, the solution of arccos() has many branches. > > For certain applications, however, it is necessary to reconstruct the > original data, x, without any discontinuities. I have implemented such > a proof-of-concept function for arccos_continuous(), which > automatically switches the branch of solution so that the result of > arccos_continuous(cos(x))==x, if the initial offset and sign are > supplied. It's hard to say without seeing the code, but certainly something like this could be useful... my main concern personally would be to make sure that the branch selection problem is sufficiently generic that it has a single, general solution appropriate for implementation in a generic, low-level library like numpy/scipy. If there's one sort of automatic branch selection that the metamaterial people use, and another that another field uses, then it makes a mess... How is arccos_continuous(x) different from unwrap(arccos(x))? I assume arccos_continuous() uses a similar heuristic to unwrap() to select branches? -n From denis-bz-py at t-online.de Tue Mar 12 14:59:44 2013 From: denis-bz-py at t-online.de (denis) Date: Tue, 12 Mar 2013 18:59:44 +0000 (UTC) Subject: [SciPy-Dev] Splines References: Message-ID: Folks, some late comments, stop me if you've heard these ... - a collection of test functions would I think help focus the *huge* range of splines that people want. It must be easy to run codes A B ...
on testfunctions T* with (not so easy) uniform printout of smoothness, runtime ... - splines above 3d: I agree with Pauli that map_coordinates has a clunky calling sequence, but it's fast, clean, any-d, works with most numpy scalar types. Is it clear how to handle any-d indices in numpy - cython - C ? Seems to me fundamental (https://github.com/ContinuumIO/dynd-python is active). - Catmull-Rom splines are common in image processing / 3d tweening and are easy, just switch base matrices. cheers -- denis From pav at iki.fi Tue Mar 12 15:55:15 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 12 Mar 2013 21:55:15 +0200 Subject: [SciPy-Dev] Splines In-Reply-To: References: Message-ID: 12.03.2013 20:59, denis kirjoitti: [clip] > - splines above 3d: > I agree with Pauli that map_coordinates has a clunky calling sequence, > but it's fast, clean, any-d, works with most numpy scalar types. However, map_coordinates is only for splines on uniform set of knots, you can't evaluate it on a grid efficiently, etc. > Is it clear how to handle any-d indices in numpy - cython - C ? > Seems to me fundamental > (https://github.com/ContinuumIO/dynd-python is active). I don't think there's yet a definite plan how to do this. Considering evaluation of B-splines as equivalent to the operation (c * B(t, x)).sum(axis=axis) with usual Numpy broadcasting and vectorization rules would probably be a swiss-knife solution. -- Pauli Virtanen From pav at iki.fi Tue Mar 12 15:57:17 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 12 Mar 2013 21:57:17 +0200 Subject: [SciPy-Dev] Feature proposal: automatic branch selection for arcsin(), arccos() In-Reply-To: References: Message-ID: 12.03.2013 20:56, Nathaniel Smith kirjoitti: [clip] > How arccos_continuous(x) different from unwrap(arccos(x))? I assume > arccos_continuous() uses a similar heuristic to unwrap() to select > branches? If you plot arccos(cos(x)), you get a sawtooth instead of discontinuities, so unwrap() doesn't help. 
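[Pauli's point is easy to verify numerically: np.unwrap only corrects sample-to-sample jumps larger than its discont threshold (pi by default), and on a dense grid the sawtooth arccos(cos(x)) has no such jumps, so unwrap leaves it unchanged — a quick sketch:]

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1000)
th = np.arccos(np.cos(x))        # a sawtooth in [0, pi], not x

# No sample-to-sample jump exceeds pi, so unwrap() is a no-op here
print(np.allclose(np.unwrap(th), th))   # True
print(np.allclose(th, x))               # False: x is not recovered
```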
The unwrapping needs to consider slope discontinuities somehow, I think. This then probably entails assumptions about the input data... -- Pauli Virtanen From josef.pktd at gmail.com Wed Mar 13 10:51:04 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Mar 2013 10:51:04 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: preliminary question, I didn't have time yet to look closely >>> scipy.__version__ '0.9.0' I have a problem where fsolve goes into a range where the values are nan. After that it goes into an endless loop, as far as I can tell. Something like this has been fixed for optimize.fmin_bfgs. Was there a fix for this also for fsolve, since 0.9.0? (The weirder story: I rearranged some tests, and unfortunately also made some other changes, and now when I run nosetests it never returns. Ctrl+C kills nosetests, but leaves a python process running. I have no clue why the test sequence should matter.) Josef From denis-bz-py at t-online.de Wed Mar 13 11:47:18 2013 From: denis-bz-py at t-online.de (denis) Date: Wed, 13 Mar 2013 15:47:18 +0000 (UTC) Subject: [SciPy-Dev] Splines References: Message-ID: Pauli Virtanen iki.fi> writes: > However, map_coordinates is only for splines on uniform set of knots, > you can't evaluate it on a grid efficiently, etc. Pauli, It's easy to uniformize non-uniform grids, see [Intergrid](http://denis-bz.github.com/docs/intergrid.html) (comments welcome). Sure, for order > 1 this is not as smooth as real non-uniform splines, but it is very simple, low-memory too, and fast: in 4d, 5d, 6d, Intergrid does around 3M, 2M, .8M interpolations / second. A bird in the hand.
cheers -- denis From josef.pktd at gmail.com Wed Mar 13 12:17:41 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Mar 2013 12:17:41 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 10:51 AM, wrote: > preliminary question, I didn't have time yet to look closely > >>>> scipy.__version__ > '0.9.0' > > I have a problem where fsolve goes into a range where the values are > nan. After that it goes into an endless loop, as far as I can tell. > > Something like this has been fixed for optimize.fmin_bfgs. Was there a > fix for this also for fsolve, since 0.9.0? > > > (The weirder story: I rearranged some test, and made unfortunately > also some other changes, and now when I run nosetests it never > returns. Ctrl+C kills nosetests, but leaves a python process running. > I have no clue why the test sequence should matter.) I had left the starting values in a module global even after I started to adjust them in one of the cases. The starting value for fsolve was in a range where the curvature is very flat, and fsolve made huge steps into the invalid range. After getting nans, it went AWOL. If I return np.inf as soon as I get a nan, then fsolve seems to stop right away. Is there a way to induce fsolve to stay out of the nan zone, for example by returning something other than inf? I don't want to find a very specific solution, because I'm throwing lots of different cases at the same generic method. Josef From charlesr.harris at gmail.com Wed Mar 13 12:19:16 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 13 Mar 2013 10:19:16 -0600 Subject: [SciPy-Dev] Splines In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 9:47 AM, denis wrote: > Pauli Virtanen iki.fi> writes: > > > However, map_coordinates is only for splines on uniform set of knots, > > you can't evaluate it on a grid efficiently, etc.
> > Pauli, > It's easy to uniformize non-uniform grids, see > [Intergrid](http://denis-bz.github.com/docs/intergrid.html) > (comments welcome). > Sure, for order > 1 this is not as smooth as real non-uniform splines > but very simple, low-memory too, fast: > in 4d, 5d, 6d, Intergrid does around 3M, 2M, .8M interpolations / > second. > > a bird in the hand. > > I'm not sure you are interpreting 'prefilter' correctly. I haven't looked at the scipy code, but the uniform spline coefficients can be gotten by filtering forward and back along each axis. It essentially factors the fit matrix into lower/upper factors with constant diagonals modulo boundary conditions, and forward/reverse substitution reduces to IIR filtering. This is also used for other interpolation schemes, for instance variance preserving interpolation which is useful when matching scenes using mutual information. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Mar 13 12:59:02 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Mar 2013 12:59:02 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 12:17 PM, wrote: > On Wed, Mar 13, 2013 at 10:51 AM, wrote: >> preliminary question, I didn't have time yet to look closely >> >>>>> scipy.__version__ >> '0.9.0' >> >> I have a problem where fsolve goes into a range where the values are >> nan. After that it goes into an endless loop, as far as I can tell. >> >> Something like this has been fixed for optimize.fmin_bfgs. Was there a >> fix for this also for fsolve, since 0.9.0? >> >> >> (The weirder story: I rearranged some test, and made unfortunately >> also some other changes, and now when I run nosetests it never >> returns. Ctrl+C kills nosetests, but leaves a python process running. >> I have no clue why the test sequence should matter.) 
> > I had left the starting values in a module global even after I started > to adjust them in one of the cases. > > The starting value for fsolve was in a range where the curvature is > very flat, and fsolve made huge steps into the invalid range. After > getting nans, it went AWOL. > > If I return np.inf as soon as I get a nan, then fsolve seems to stop > right away. Is there a way to induce fsolve to stay out of the nan > zone, for example returning something else than inf? > > I don't want to find a very specific solution, because I'm throwing > lot's of different cases at the same generic method. same result with python 2.7, scipy version 0.11.0b1 >"C:\Programs\Python27\python.exe" fsolve_endless_nan.py scipy version 0.11.0b1 args 0.3 [100] 0.05 args 0.3 [ 100.] 0.05 args 0.3 [ 100.] 0.05 args 0.3 [ 100.00000149] 0.05 args 0.3 [-132.75434239] 0.05 fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in sqrt pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in sqrt pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) args 0.3 [ nan] 0.05 standalone test case (from my power branch) Don't run in an interpreter (session) that you want to keep alive! And open TaskManager if you are on Windows :) ------------ # -*- coding: utf-8 -*- """Warning: endless loop in runaway process, requires hard kill of process Created on Wed Mar 13 12:44:15 2013 Author: Josef Perktold """ import numpy as np from scipy import stats, optimize import scipy print "scipy version", scipy.__version__ def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'): '''Calculate power of a ttest ''' print 'args', effect_size, nobs, alpha d = effect_size if df is None: df = nobs - 1 if alternative in ['two-sided', '2s']: alpha_ = alpha / 2. 
#no inplace changes, doesn't work elif alternative in ['smaller', 'larger']: alpha_ = alpha else: raise ValueError("alternative has to be 'two-sided', 'larger' " + "or 'smaller'") pow_ = 0 if alternative in ['two-sided', '2s', 'larger']: crit_upp = stats.t.isf(alpha_, df) # use private methods, generic methods return nan with negative d pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) if alternative in ['two-sided', '2s', 'smaller']: crit_low = stats.t.ppf(alpha_, df) pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) return pow_ func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) print optimize.fsolve(func, 100, args=(0.3, 0.05)) ------------ > > Josef > >> >> Josef From josef.pktd at gmail.com Wed Mar 13 13:24:01 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Mar 2013 13:24:01 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 12:59 PM, wrote: > On Wed, Mar 13, 2013 at 12:17 PM, wrote: >> On Wed, Mar 13, 2013 at 10:51 AM, wrote: >>> preliminary question, I didn't have time yet to look closely >>> >>>>>> scipy.__version__ >>> '0.9.0' >>> >>> I have a problem where fsolve goes into a range where the values are >>> nan. After that it goes into an endless loop, as far as I can tell. >>> >>> Something like this has been fixed for optimize.fmin_bfgs. Was there a >>> fix for this also for fsolve, since 0.9.0? >>> >>> >>> (The weirder story: I rearranged some test, and made unfortunately >>> also some other changes, and now when I run nosetests it never >>> returns. Ctrl+C kills nosetests, but leaves a python process running. >>> I have no clue why the test sequence should matter.) >> >> I had left the starting values in a module global even after I started >> to adjust them in one of the cases. >> >> The starting value for fsolve was in a range where the curvature is >> very flat, and fsolve made huge steps into the invalid range. 
After >> getting nans, it went AWOL. >> >> If I return np.inf as soon as I get a nan, then fsolve seems to stop >> right away. Is there a way to induce fsolve to stay out of the nan >> zone, for example returning something else than inf? >> >> I don't want to find a very specific solution, because I'm throwing >> lot's of different cases at the same generic method. > > same result with python 2.7, scipy version 0.11.0b1 > >>"C:\Programs\Python27\python.exe" fsolve_endless_nan.py > scipy version 0.11.0b1 > args 0.3 [100] 0.05 > args 0.3 [ 100.] 0.05 > args 0.3 [ 100.] 0.05 > args 0.3 [ 100.00000149] 0.05 > args 0.3 [-132.75434239] 0.05 > fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in sqrt > pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) > fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in sqrt > pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) > args 0.3 [ nan] 0.05 > > > standalone test case (from my power branch) > > Don't run in an interpreter (session) that you want to keep alive! > And open TaskManager if you are on Windows :) > > ------------ > # -*- coding: utf-8 -*- > """Warning: endless loop in runaway process, requires hard kill of process > > Created on Wed Mar 13 12:44:15 2013 > > Author: Josef Perktold > """ > > import numpy as np > from scipy import stats, optimize > > import scipy > print "scipy version", scipy.__version__ > > > def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'): > '''Calculate power of a ttest > ''' > print 'args', effect_size, nobs, alpha > d = effect_size > if df is None: > df = nobs - 1 > > if alternative in ['two-sided', '2s']: > alpha_ = alpha / 2. 
#no inplace changes, doesn't work > elif alternative in ['smaller', 'larger']: > alpha_ = alpha > else: > raise ValueError("alternative has to be 'two-sided', 'larger' " + > "or 'smaller'") > > pow_ = 0 > if alternative in ['two-sided', '2s', 'larger']: > crit_upp = stats.t.isf(alpha_, df) > # use private methods, generic methods return nan with negative d > pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) > if alternative in ['two-sided', '2s', 'smaller']: > crit_low = stats.t.ppf(alpha_, df) > pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) > return pow_ > > func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) > > print optimize.fsolve(func, 100, args=(0.3, 0.05)) > ------------ correction to get a solution that would make sense (last two lines) func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) - args[2] print optimize.fsolve(func, 10, args=(0.76638635, 0.1, 0.8)) #converges to 12 print optimize.fsolve(func, 100, args=(0.76638635, 0.1, 0.8)) #runaway Josef "Lost in Translation" > >> >> Josef >> >>> >>> Josef From denis-bz-py at t-online.de Wed Mar 13 14:54:03 2013 From: denis-bz-py at t-online.de (denis) Date: Wed, 13 Mar 2013 18:54:03 +0000 (UTC) Subject: [SciPy-Dev] Splines References: Message-ID: Charles R Harris gmail.com> writes: > I'm not sure you are interpreting 'prefilter' correctly. I haven't looked at the scipy code, but the uniform spline coefficients can be gotten by filtering forward and back along each axis. It essentially factors the fit matrix into lower/upper factors with constant diagonals modulo boundary conditions, and forward/reverse substitution reduces to IIR filtering. This is also used for other interpolation schemes, for instance variance preserving interpolation which is useful when matching scenes using mutual information.Chuck Chuck, is "prefilter=True will filter out high frequencies in the data" not correct ? 
It's a pure preprocessing step, spline_filter in ndimage/interpolation.py iterates spline_filter1d Bspline smoothing along each axis. How about running some testfunctions with / without prefilter, do you know of any real or realistic ? cheers -- denis From charlesr.harris at gmail.com Wed Mar 13 15:48:13 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 13 Mar 2013 13:48:13 -0600 Subject: [SciPy-Dev] Splines In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 12:54 PM, denis wrote: > Charles R Harris gmail.com> writes: > > > I'm not sure you are interpreting 'prefilter' correctly. I haven't > looked at > the scipy code, but the uniform spline coefficients can be gotten by > filtering > forward and back along each axis. It essentially factors the fit matrix > into > lower/upper factors with constant diagonals modulo boundary conditions, and > forward/reverse substitution reduces to IIR filtering. This is also used > for > other interpolation schemes, for instance variance preserving interpolation > which is useful when matching scenes using mutual information.Chuck > > Chuck, > is "prefilter=True will filter out high frequencies in the data" > not correct ? No, I suspect it increases the high frequencies ;) For cubic splines convolving the coefficients with [1/6, 2/3, 1/6] will reproduce the sample values, so to get the spline coefficients you need to deconvolve what is essentially a low pass filter. Direct deconvolution is numerically unstable, so you need to factor the kernel and run one factor forward and the other backwards to get the coefficients. It's a pure preprocessing step, > spline_filter in ndimage/interpolation.py > iterates spline_filter1d Bspline smoothing along each axis. > > How about running some testfunctions with / without prefilter, > do you know of any real or realistic ? > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
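[Chuck's description can be checked numerically: with mirror boundaries, convolving the cubic-spline coefficients produced by spline_filter1d with the kernel [1/6, 2/3, 1/6] (the cubic B-spline sampled at integers) reproduces the original samples at interior points — a quick sketch:]

```python
import numpy as np
from scipy.ndimage import spline_filter1d

rng = np.random.default_rng(0)
x = rng.standard_normal(50)

# Coefficients c such that the cubic B-spline through c interpolates x
c = spline_filter1d(x, order=3, mode='mirror')

# Cubic B-spline at integer offsets: B(0) = 2/3, B(+-1) = 1/6
rec = np.convolve(c, [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0], mode='same')

# Interior samples are reproduced (endpoints differ because np.convolve
# zero-pads instead of mirroring)
print(np.max(np.abs(rec[1:-1] - x[1:-1])))  # ~0
```

So the "prefilter" is the inverse of a low-pass kernel, i.e. it boosts rather than removes high frequencies.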
URL: From warren.weckesser at gmail.com Fri Mar 15 14:47:43 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 15 Mar 2013 14:47:43 -0400 Subject: [SciPy-Dev] Add ability to disable the autogeneration of the function signature in a ufunc docstring. Message-ID: Hi all, In a recent scipy pull request (https://github.com/scipy/scipy/pull/459), I ran into the problem of ufuncs automatically generating a signature in the docstring using arguments such as 'x' or 'x1, x2'. scipy.special has a lot of ufuncs, and for most of them there are much more descriptive or conventional argument names than 'x'. For now, we will include a nicer signature in the added docstring and grudgingly put up with the one generated by the ufunc. In the long term, it would be nice to be able to disable the automatic generation of the signature. I submitted a pull request to numpy to allow that: https://github.com/numpy/numpy/pull/3149 Comments on the pull request would be appreciated. Thanks, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.asenjo at gmail.com Sat Mar 16 05:55:13 2013 From: daniel.asenjo at gmail.com (Daniel Asenjo) Date: Sat, 16 Mar 2013 09:55:13 +0000 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize Message-ID: I have an implementation of the FIRE minimization algorithm that I would like to add to scipy.optimize. The algorithm is described here: http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 FIRE is relatively fast, although not as efficient as L-BFGS, but it is much more stable, as shown in our recently submitted paper: http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf I think that FIRE would be a good addition to the existing algorithms in scipy.optimize, as it is becoming popular within the community. What do you think? Daniel. -------------- next part -------------- An HTML attachment was scrubbed...
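[For readers unfamiliar with it, FIRE (Fast Inertial Relaxation Engine, Bitzek et al. 2006) is a damped-MD minimizer: take an MD step, mix the velocity toward the force direction, speed up while the power F·v stays positive, and freeze and restart when it turns negative. A bare-bones editorial sketch with the parameter values from the paper — an illustration, not Daniel's implementation:]

```python
import numpy as np

def fire(grad, x0, dt=0.1, dt_max=0.5, n_min=5, f_inc=1.1,
         f_dec=0.5, alpha0=0.1, f_alpha=0.99, tol=1e-8, max_iter=10000):
    """Minimal FIRE minimizer for a function with gradient `grad`."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    alpha, since_neg = alpha0, 0
    for _ in range(max_iter):
        f = -grad(x)                      # force = -gradient
        if np.linalg.norm(f) < tol:
            break
        p = np.dot(f, v)                  # power
        if p > 0:
            since_neg += 1
            if since_neg > n_min:         # accelerate while going downhill
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                             # uphill: freeze and restart
            since_neg, v, dt, alpha = 0, np.zeros_like(x), dt * f_dec, alpha0
        v += dt * f                       # semi-implicit Euler MD step
        vn, fn = np.linalg.norm(v), np.linalg.norm(f)
        if fn > 0:
            v = (1 - alpha) * v + alpha * vn * f / fn   # velocity mixing
        x += dt * v
    return x

x_min = fire(lambda x: x - np.array([1.0, -2.0]),   # grad of 0.5*||x - a||^2
             np.array([5.0, 5.0]))
print(x_min)                              # close to [1, -2]
```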
URL: From msuzen at gmail.com Sat Mar 16 11:03:28 2013 From: msuzen at gmail.com (Suzen, Mehmet) Date: Sat, 16 Mar 2013 16:03:28 +0100 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize In-Reply-To: References: Message-ID: It would be nice to see this in SciPy On 16 March 2013 10:55, Daniel Asenjo wrote: > I have an implementation of the FIRE minimization algorithm that I would > like add to scipy.optimize. > > The algorithm is described here: > > http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 > > FIRE is relatively fast although not as efficient as L-BFGS but it is much > more stable as shown in our recently submitted paper: > > http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf > > I think that FIRE would be a good addition to the existing algorithms in > scipy.optimize as it is becoming popular within the community. > > What do you think? > > Daniel. > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From jstevenson131 at gmail.com Sat Mar 16 11:26:34 2013 From: jstevenson131 at gmail.com (Jacob Stevenson) Date: Sat, 16 Mar 2013 15:26:34 +0000 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize In-Reply-To: References: Message-ID: <09A751E6-4E53-40B8-B0E2-FEC4F8CA97F8@gmail.com> I also think it would be a useful addition. On 16 Mar 2013, at 15:03, Suzen, Mehmet wrote: > It would be nice to see this in SciPy > > On 16 March 2013 10:55, Daniel Asenjo wrote: >> I have an implementation of the FIRE minimization algorithm that I would >> like add to scipy.optimize. 
>> >> The algorithm is described here: >> >> http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 >> >> FIRE is relatively fast although not as efficient as L-BFGS but it is much >> more stable as shown in our recently submitted paper: >> >> http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf >> >> I think that FIRE would be a good addition to the existing algorithms in >> scipy.optimize as it is becoming popular within the community. >> >> What do you think? >> >> Daniel. >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From dave.hirschfeld at gmail.com Sun Mar 17 05:11:47 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Sun, 17 Mar 2013 09:11:47 +0000 (UTC) Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize References: Message-ID: Daniel Asenjo gmail.com> writes: > > > I have an implementation of the FIRE minimization algorithm that I would like add to scipy.optimize. > The algorithm is described here: > http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 > FIRE is relatively fast although not as efficient as L-BFGS but it is much more stable as shown in our recently submitted paper: > http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf > I think that FIRE would be a good addition to the existing algorithms in scipy.optimize as it is becoming popular within the community. > What do you think? > Daniel. > > As a user, the more high quality optimization algorithms I can access from Scipy with a common interface without having to resort to 3rd party packages, the better! 
Thanks, Dave From geier at lostpackets.de Mon Mar 18 11:19:22 2013 From: geier at lostpackets.de (Christian Geier) Date: Mon, 18 Mar 2013 15:19:22 +0000 Subject: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki Algorithm In-Reply-To: References: <20130309012621.GA65314@brutus.lostpackets.de> Message-ID: <20130318151922.GA20372@brutus.lostpackets.de> On Fri, Mar 08, 2013 at 05:18:56PM -0800, Jacob Vanderplas wrote: > Hi, > We have a cython version of this Lomb-Scargle algorithm in astroML [1], as > well as a generalized version that does not depend on the sample mean being > a good approximation of the true mean. We've not yet implemented the FFT > trick shown in Press & Rybicki, but it's fairly fast as-is for problems of > reasonable size. > > For some examples of it in use on astronomical data, see [2-3] below (the > examples are figures from our upcoming astro/statistics textbook). This > code is BSD-licensed, so if it seems generally useful enough to include in > scipy, it would be no problem to port it. > > Also, if you have implemented an FFT-based version, it would get a fair bit > of use in the astronomy community if you were willing to contribute it to > astroML. > > Thanks, > Jake > > [1] > https://github.com/astroML/astroML/blob/master/astroML_addons/periodogram.pyx > [2] http://astroml.github.com/book_figures/chapter10/fig_LS_example.html > [3] > http://astroml.github.com/book_figures/chapter10/fig_LS_sg_comparison.html > > > On Fri, Mar 8, 2013 at 5:26 PM, Christian Geier wrote: > > > Hello everyone! > > > > Would you in general be considering to include the Lombscargle > > Periodogram by Press & Rybicki [1] into scipy in addition to the already > > present one? I find the included algorithm by Townend rather slow and > > had recently some "interesting" results returned by it. 
> > > > I've recently translated the original FORTRAN code (which is actually > > the description of the algorithm [1]) to (pure) python [2] and would like > > to know what the legal situation is in this case: can I release this > > translated code under the BSD license? > > > > In this case I would translate the code further to cython and supply > > tests and more documentation. > > > > Greetings > > > > Christian Geier > > > > [1] http://adsabs.harvard.edu/full/1989ApJ...338..277P > > [2] > > https://github.com/geier/scipy/commit/710bf4ca514d223df39891eb20401aba2edd8cdb Hi, thanks for your answer. When time allows for it (hopefully next week) I will improve the code some more and then contact astroML. Regards Christian From josef.pktd at gmail.com Mon Mar 18 11:14:41 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 11:14:41 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Wed, Mar 13, 2013 at 1:24 PM, wrote: > On Wed, Mar 13, 2013 at 12:59 PM, wrote: >> On Wed, Mar 13, 2013 at 12:17 PM, wrote: >>> On Wed, Mar 13, 2013 at 10:51 AM, wrote: >>>> preliminary question, I didn't have time yet to look closely >>>> >>>>>>> scipy.__version__ >>>> '0.9.0' >>>> >>>> I have a problem where fsolve goes into a range where the values are >>>> nan. After that it goes into an endless loop, as far as I can tell. >>>> >>>> Something like this has been fixed for optimize.fmin_bfgs. Was there a >>>> fix for this also for fsolve, since 0.9.0? >>>> >>>> >>>> (The weirder story: I rearranged some test, and made unfortunately >>>> also some other changes, and now when I run nosetests it never >>>> returns. Ctrl+C kills nosetests, but leaves a python process running. >>>> I have no clue why the test sequence should matter.) >>> >>> I had left the starting values in a module global even after I started >>> to adjust them in one of the cases. 
>>> >>> The starting value for fsolve was in a range where the curvature is >>> very flat, and fsolve made huge steps into the invalid range. After >>> getting nans, it went AWOL. >>> >>> If I return np.inf as soon as I get a nan, then fsolve seems to stop >>> right away. Is there a way to induce fsolve to stay out of the nan >>> zone, for example returning something else than inf? >>> >>> I don't want to find a very specific solution, because I'm throwing >>> lot's of different cases at the same generic method. >> >> same result with python 2.7, scipy version 0.11.0b1 >> >>>"C:\Programs\Python27\python.exe" fsolve_endless_nan.py >> scipy version 0.11.0b1 >> args 0.3 [100] 0.05 >> args 0.3 [ 100.] 0.05 >> args 0.3 [ 100.] 0.05 >> args 0.3 [ 100.00000149] 0.05 >> args 0.3 [-132.75434239] 0.05 >> fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in sqrt >> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) >> fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in sqrt >> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) >> args 0.3 [ nan] 0.05 >> >> >> standalone test case (from my power branch) >> >> Don't run in an interpreter (session) that you want to keep alive! >> And open TaskManager if you are on Windows :) >> >> ------------ >> # -*- coding: utf-8 -*- >> """Warning: endless loop in runaway process, requires hard kill of process >> >> Created on Wed Mar 13 12:44:15 2013 >> >> Author: Josef Perktold >> """ >> >> import numpy as np >> from scipy import stats, optimize >> >> import scipy >> print "scipy version", scipy.__version__ >> >> >> def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'): >> '''Calculate power of a ttest >> ''' >> print 'args', effect_size, nobs, alpha >> d = effect_size >> if df is None: >> df = nobs - 1 >> >> if alternative in ['two-sided', '2s']: >> alpha_ = alpha / 2. 
#no inplace changes, doesn't work >> elif alternative in ['smaller', 'larger']: >> alpha_ = alpha >> else: >> raise ValueError("alternative has to be 'two-sided', 'larger' " + >> "or 'smaller'") >> >> pow_ = 0 >> if alternative in ['two-sided', '2s', 'larger']: >> crit_upp = stats.t.isf(alpha_, df) >> # use private methods, generic methods return nan with negative d >> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) >> if alternative in ['two-sided', '2s', 'smaller']: >> crit_low = stats.t.ppf(alpha_, df) >> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) >> return pow_ >> >> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) >> >> print optimize.fsolve(func, 100, args=(0.3, 0.05)) >> ------------ > > correction to get a solution that would make sense (last two lines) > > func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) - args[2] > > print optimize.fsolve(func, 10, args=(0.76638635, 0.1, 0.8)) > #converges to 12 > print optimize.fsolve(func, 100, args=(0.76638635, 0.1, 0.8)) #runaway > > Josef > "Lost in Translation" (continuing my monologue) more problems with fsolve, looks like on Ubuntu Skipper uses scipy master, but python-xy daily testing uses python-scipy (0.10.1+dfsg1-4) TravisCI, which also runs Ubuntu doesn't have any problems (using python-scipy amd64 0.9.0+dfsg1-1ubuntu2 ) https://travis-ci.org/statsmodels/statsmodels/jobs/5585028 (just realized: I've tested on Windows so far only with scipy 0.9) from Skipper's run ( https://github.com/statsmodels/statsmodels/issues/710 ): ----------- [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] [67]: optimize.fsolve(func, .14, full_output=True) [67]: (array([ 0.05]), {'fjac': array([[-1.]]), 'fvec': array([ 0.]), 'nfev': 12, 'qtf': array([ 0.]), 'r': array([-3.408293])}, 1, 'The solution converged.') [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] [68]: optimize.fsolve(func, .15, full_output=True) [68]: (array([ 0.15]), {'fjac': array([[-1.]]), 'fvec': 
array([ 0.20178728]), 'nfev': 4, 'qtf': array([-0.20178728]), 'r': array([ inf])}, 1, 'The solution converged.') It looks like the QR factorization is failing (r == inf) and then it's reporting convergence still. --------- That it stops with a "solution converged" also doesn't trigger my backup root-finding. I can work around this since I have no idea how to debug this. However, there might be something fishy with fsolve. I never had problems with fsolve before, and scipy.stats.distributions is a heavy user of it. Josef > >> >>> >>> Josef >>> >>>> >>>> Josef From josef.pktd at gmail.com Mon Mar 18 11:29:17 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 11:29:17 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 11:14 AM, wrote: > On Wed, Mar 13, 2013 at 1:24 PM, wrote: >> On Wed, Mar 13, 2013 at 12:59 PM, wrote: >>> On Wed, Mar 13, 2013 at 12:17 PM, wrote: >>>> On Wed, Mar 13, 2013 at 10:51 AM, wrote: >>>>> preliminary question, I didn't have time yet to look closely >>>>> >>>>>>>> scipy.__version__ >>>>> '0.9.0' >>>>> >>>>> I have a problem where fsolve goes into a range where the values are >>>>> nan. After that it goes into an endless loop, as far as I can tell. >>>>> >>>>> Something like this has been fixed for optimize.fmin_bfgs. Was there a >>>>> fix for this also for fsolve, since 0.9.0? >>>>> >>>>> >>>>> (The weirder story: I rearranged some test, and made unfortunately >>>>> also some other changes, and now when I run nosetests it never >>>>> returns. Ctrl+C kills nosetests, but leaves a python process running. >>>>> I have no clue why the test sequence should matter.) >>>> >>>> I had left the starting values in a module global even after I started >>>> to adjust them in one of the cases. >>>> >>>> The starting value for fsolve was in a range where the curvature is >>>> very flat, and fsolve made huge steps into the invalid range. 
After >>>> getting nans, it went AWOL. >>>> >>>> If I return np.inf as soon as I get a nan, then fsolve seems to stop >>>> right away. Is there a way to induce fsolve to stay out of the nan >>>> zone, for example returning something else than inf? >>>> >>>> I don't want to find a very specific solution, because I'm throwing >>>> lot's of different cases at the same generic method. >>> >>> same result with python 2.7, scipy version 0.11.0b1 >>> >>>>"C:\Programs\Python27\python.exe" fsolve_endless_nan.py >>> scipy version 0.11.0b1 >>> args 0.3 [100] 0.05 >>> args 0.3 [ 100.] 0.05 >>> args 0.3 [ 100.] 0.05 >>> args 0.3 [ 100.00000149] 0.05 >>> args 0.3 [-132.75434239] 0.05 >>> fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in sqrt >>> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) >>> fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in sqrt >>> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) >>> args 0.3 [ nan] 0.05 >>> >>> >>> standalone test case (from my power branch) >>> >>> Don't run in an interpreter (session) that you want to keep alive! >>> And open TaskManager if you are on Windows :) >>> >>> ------------ >>> # -*- coding: utf-8 -*- >>> """Warning: endless loop in runaway process, requires hard kill of process >>> >>> Created on Wed Mar 13 12:44:15 2013 >>> >>> Author: Josef Perktold >>> """ >>> >>> import numpy as np >>> from scipy import stats, optimize >>> >>> import scipy >>> print "scipy version", scipy.__version__ >>> >>> >>> def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'): >>> '''Calculate power of a ttest >>> ''' >>> print 'args', effect_size, nobs, alpha >>> d = effect_size >>> if df is None: >>> df = nobs - 1 >>> >>> if alternative in ['two-sided', '2s']: >>> alpha_ = alpha / 2. 
#no inplace changes, doesn't work >>> elif alternative in ['smaller', 'larger']: >>> alpha_ = alpha >>> else: >>> raise ValueError("alternative has to be 'two-sided', 'larger' " + >>> "or 'smaller'") >>> >>> pow_ = 0 >>> if alternative in ['two-sided', '2s', 'larger']: >>> crit_upp = stats.t.isf(alpha_, df) >>> # use private methods, generic methods return nan with negative d >>> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) >>> if alternative in ['two-sided', '2s', 'smaller']: >>> crit_low = stats.t.ppf(alpha_, df) >>> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) >>> return pow_ >>> >>> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) >>> >>> print optimize.fsolve(func, 100, args=(0.3, 0.05)) >>> ------------ >> >> correction to get a solution that would make sense (last two lines) >> >> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) - args[2] >> >> print optimize.fsolve(func, 10, args=(0.76638635, 0.1, 0.8)) >> #converges to 12 >> print optimize.fsolve(func, 100, args=(0.76638635, 0.1, 0.8)) #runaway >> >> Josef >> "Lost in Translation" > > (continuing my monologue) > > more problems with fsolve, looks like on Ubuntu > > Skipper uses scipy master, but python-xy daily testing uses > python-scipy (0.10.1+dfsg1-4) > TravisCI, which also runs Ubuntu doesn't have any problems (using > python-scipy amd64 0.9.0+dfsg1-1ubuntu2 ) > https://travis-ci.org/statsmodels/statsmodels/jobs/5585028 > > (just realized: I've tested on Windows so far only with scipy 0.9) I should have done the testing across versions: I'm getting the same failures on Windows with >>> import scipy >>> scipy.__version__ '0.11.0b1' but not with scipy 0.9 something has changed in the recent scipy versions that makes fsolve more fragile Josef > > from Skipper's run ( https://github.com/statsmodels/statsmodels/issues/710 ): > > ----------- > [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] > [67]: optimize.fsolve(func, .14, full_output=True) > [67]: 
> (array([ 0.05]), > {'fjac': array([[-1.]]), > 'fvec': array([ 0.]), > 'nfev': 12, > 'qtf': array([ 0.]), > 'r': array([-3.408293])}, > 1, > 'The solution converged.') > > [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] > [68]: optimize.fsolve(func, .15, full_output=True) > [68]: > (array([ 0.15]), > {'fjac': array([[-1.]]), > 'fvec': array([ 0.20178728]), > 'nfev': 4, > 'qtf': array([-0.20178728]), > 'r': array([ inf])}, > 1, > 'The solution converged.') > > It looks like the QR factorization is failing (r == inf) and then it's > reporting convergence still. > --------- > > That it stops with a "solution converged" also doesn't trigger my > backup root-finding. > > > I can work around this since I have no idea how to debug this. > However, there might be something fishy with fsolve. > > I never had problems with fsolve before, and scipy.stats.distributions > is a heavy user of it. > > Josef > >> >>> >>>> >>>> Josef >>>> >>>>> >>>>> Josef From jsseabold at gmail.com Mon Mar 18 11:40:49 2013 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 18 Mar 2013 11:40:49 -0400 Subject: [SciPy-Dev] optimize.fsolve endless loop with nan In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 11:29 AM, wrote: > On Mon, Mar 18, 2013 at 11:14 AM, wrote: > > On Wed, Mar 13, 2013 at 1:24 PM, wrote: > >> On Wed, Mar 13, 2013 at 12:59 PM, wrote: > >>> On Wed, Mar 13, 2013 at 12:17 PM, wrote: > >>>> On Wed, Mar 13, 2013 at 10:51 AM, wrote: > >>>>> preliminary question, I didn't have time yet to look closely > >>>>> > >>>>>>>> scipy.__version__ > >>>>> '0.9.0' > >>>>> > >>>>> I have a problem where fsolve goes into a range where the values are > >>>>> nan. After that it goes into an endless loop, as far as I can tell. > >>>>> > >>>>> Something like this has been fixed for optimize.fmin_bfgs. Was there > a > >>>>> fix for this also for fsolve, since 0.9.0? 
> >>>>> > >>>>> > >>>>> (The weirder story: I rearranged some test, and made unfortunately > >>>>> also some other changes, and now when I run nosetests it never > >>>>> returns. Ctrl+C kills nosetests, but leaves a python process running. > >>>>> I have no clue why the test sequence should matter.) > >>>> > >>>> I had left the starting values in a module global even after I started > >>>> to adjust them in one of the cases. > >>>> > >>>> The starting value for fsolve was in a range where the curvature is > >>>> very flat, and fsolve made huge steps into the invalid range. After > >>>> getting nans, it went AWOL. > >>>> > >>>> If I return np.inf as soon as I get a nan, then fsolve seems to stop > >>>> right away. Is there a way to induce fsolve to stay out of the nan > >>>> zone, for example returning something else than inf? > >>>> > >>>> I don't want to find a very specific solution, because I'm throwing > >>>> lot's of different cases at the same generic method. > >>> > >>> same result with python 2.7, scipy version 0.11.0b1 > >>> > >>>>"C:\Programs\Python27\python.exe" fsolve_endless_nan.py > >>> scipy version 0.11.0b1 > >>> args 0.3 [100] 0.05 > >>> args 0.3 [ 100.] 0.05 > >>> args 0.3 [ 100.] 0.05 > >>> args 0.3 [ 100.00000149] 0.05 > >>> args 0.3 [-132.75434239] 0.05 > >>> fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in > sqrt > >>> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) > >>> fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in > sqrt > >>> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) > >>> args 0.3 [ nan] 0.05 > >>> > >>> > >>> standalone test case (from my power branch) > >>> > >>> Don't run in an interpreter (session) that you want to keep alive! 
> >>> And open TaskManager if you are on Windows :) > >>> > >>> ------------ > >>> # -*- coding: utf-8 -*- > >>> """Warning: endless loop in runaway process, requires hard kill of > process > >>> > >>> Created on Wed Mar 13 12:44:15 2013 > >>> > >>> Author: Josef Perktold > >>> """ > >>> > >>> import numpy as np > >>> from scipy import stats, optimize > >>> > >>> import scipy > >>> print "scipy version", scipy.__version__ > >>> > >>> > >>> def ttest_power(effect_size, nobs, alpha, df=None, > alternative='two-sided'): > >>> '''Calculate power of a ttest > >>> ''' > >>> print 'args', effect_size, nobs, alpha > >>> d = effect_size > >>> if df is None: > >>> df = nobs - 1 > >>> > >>> if alternative in ['two-sided', '2s']: > >>> alpha_ = alpha / 2. #no inplace changes, doesn't work > >>> elif alternative in ['smaller', 'larger']: > >>> alpha_ = alpha > >>> else: > >>> raise ValueError("alternative has to be 'two-sided', 'larger' > " + > >>> "or 'smaller'") > >>> > >>> pow_ = 0 > >>> if alternative in ['two-sided', '2s', 'larger']: > >>> crit_upp = stats.t.isf(alpha_, df) > >>> # use private methods, generic methods return nan with > negative d > >>> pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs)) > >>> if alternative in ['two-sided', '2s', 'smaller']: > >>> crit_low = stats.t.ppf(alpha_, df) > >>> pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs)) > >>> return pow_ > >>> > >>> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) > >>> > >>> print optimize.fsolve(func, 100, args=(0.3, 0.05)) > >>> ------------ > >> > >> correction to get a solution that would make sense (last two lines) > >> > >> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) - args[2] > >> > >> print optimize.fsolve(func, 10, args=(0.76638635, 0.1, 0.8)) > >> #converges to 12 > >> print optimize.fsolve(func, 100, args=(0.76638635, 0.1, 0.8)) > #runaway > >> > >> Josef > >> "Lost in Translation" > > > > (continuing my monologue) > > > > more problems with fsolve, looks 
like on Ubuntu > > > > Skipper uses scipy master, but python-xy daily testing uses > > python-scipy (0.10.1+dfsg1-4) > > TravisCI, which also runs Ubuntu doesn't have any problems (using > > python-scipy amd64 0.9.0+dfsg1-1ubuntu2 ) > > https://travis-ci.org/statsmodels/statsmodels/jobs/5585028 > > > > (just realized: I've tested on Windows so far only with scipy 0.9) > > I should have done the testing across versions: > > I'm getting the same failures on Windows with > >>> import scipy > >>> scipy.__version__ > '0.11.0b1' > > but not with scipy 0.9 > > something has changed in the recent scipy versions that makes fsolve > more fragile > > Another data point if it helps. Using 'lm' of the new optimize.root is ok, but all of the other method also fail on this - even when starting very close the actual root. Most reach maximum iterations. [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] [80]: optimize.root(func, .15, method="lm") [80]: status: 2 qtf: array([ 0.]) cov_x: array([[ 0.08608474]]) ipvt: array([1], dtype=int32) success: True nfev: 16 fun: array([ 0.]) x: array([ 0.05]) message: 'The relative error between two consecutive iterates is at most 0.000000' fjac: array([[-3.40829292]]) [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] [81]: optimize.root(func, .15, method="hybr") [81]: status: 1 success: True qtf: array([-0.20178728]) nfev: 4 r: array([ inf]) fun: array([ 0.20178728]) x: array([ 0.15]) message: 'The solution converged.' 
fjac: array([[-1.]]) > Josef > > > > > > from Skipper's run ( > https://github.com/statsmodels/statsmodels/issues/710 ): > > > > ----------- > > [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] > > [67]: optimize.fsolve(func, .14, full_output=True) > > [67]: > > (array([ 0.05]), > > {'fjac': array([[-1.]]), > > 'fvec': array([ 0.]), > > 'nfev': 12, > > 'qtf': array([ 0.]), > > 'r': array([-3.408293])}, > > 1, > > 'The solution converged.') > > > > [~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/] > > [68]: optimize.fsolve(func, .15, full_output=True) > > [68]: > > (array([ 0.15]), > > {'fjac': array([[-1.]]), > > 'fvec': array([ 0.20178728]), > > 'nfev': 4, > > 'qtf': array([-0.20178728]), > > 'r': array([ inf])}, > > 1, > > 'The solution converged.') > > > > It looks like the QR factorization is failing (r == inf) and then it's > > reporting convergence still. > > --------- > > > > That it stops with a "solution converged" also doesn't trigger my > > backup root-finding. > > > > > > I can work around this since I have no idea how to debug this. > > However, there might be something fishy with fsolve. > > > > I never had problems with fsolve before, and scipy.stats.distributions > > is a heavy user of it. > > > > Josef > > > >> > >>> > >>>> > >>>> Josef > >>>> > >>>>> > >>>>> Josef > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From helmrp at yahoo.com Mon Mar 18 12:53:34 2013 From: helmrp at yahoo.com (The Helmbolds) Date: Mon, 18 Mar 2013 09:53:34 -0700 (PDT) Subject: [SciPy-Dev] SciPy-Dev Digest, Vol 113, Issue 21 In-Reply-To: References: Message-ID: <1363625614.44736.YahooMailNeo@web31802.mail.mud.yahoo.com> Regarding fsolve,? 
*Cautionary Note*: According to [2], the `hybrd` Fortran program computes an initial value of the "approximate Jacobian", but updates that value only when "the rank-1 method is not producing satisfactory progress", where the reference is to Broyden's rank-1 method [4]_ (used in an `fsolve` inner loop). Examining the number of Jacobian evaluations used (see `njev`) will give some idea of how often `fsolve` updated the Jacobian. Because the program's QR-related outputs (`fjac`, `r`, and `qtf`) are based on the program's "approximate Jacobian", they should not be used in subsequent analyses unless their validity is confirmed by independent computations. (Example 2 illustrates this point.)

References
----------
.. [1] Powell, M. J. D., "A Hybrid Method for Nonlinear Equations,"
   in `Numerical Methods for Nonlinear Algebraic Equations`,
   P. Rabinowitz, ed., Gordon and Breach, 1970.
.. [2] Moré, Jorge J., et al., User Guide for MINPACK-1,
   Argonne National Laboratory, Report ANL-80-74, August 1980.
   (Out of print. For contents, see
   http://www.mcs.anl.gov/~more/ANL8074a.pdf and
   http://www.mcs.anl.gov/~more/ANL8074b.pdf.)
.. [3] A copy of the Fortran code is in SciPy module
   "optimize/MINPACK".
.. [4] Broyden, C. G., "A Class of Methods for Solving Nonlinear
   Simultaneous Equations", Mathematics of Computation
   (American Mathematical Society), 19 (92), October 1965,
   pp. 577-593. doi:10.2307/2003941. JSTOR 2003941.

Examples
--------
`Example 1`: Suppose we wish to solve the following system of three simultaneous nonlinear equations in the three variables (u, v, w):

* 2*a*u + b*v + d - w*v = 0
* b*u + 2*c*v + e - w*u = 0
* -u*v + g = 0

For this system, the Jacobian matrix with derivatives across columns is:

    +-------+-------+----+
    |  2*a  | b - w | -v |
    +-------+-------+----+
    | b - w |  2*c  | -u |
    +-------+-------+----+
    |  -v   |  -u   |  0 |
    +-------+-------+----+

It is symmetrical, and could be considered as "banded", though we make no use of that in these examples. We may proceed as follows.

>>> import numpy as np
>>> params = (2, 3, 7, 8, 9, 10, 2)
>>> def Myfunc(z, *params):
...     (u, v, w) = z
...     (a, b, c, d, e, f, g) = params
...     # Make a list of the LHS values.
...     temp = [2*a*u + b*v + d - w*v]
...     temp.append(b*u + 2*c*v + e - w*u)
...     temp.append(-u*v + g)
...     return temp
>>>
>>> def Myfprime(z, *params):
...     (u, v, w) = z
...     (a, b, c, d, e, f, g) = params
...     J = np.array([[2*a, b - w, -v],
...                   [b - w, 2*c, -u],
...                   [-v, -u, 0]])
...     return J
>>>
>>> z0 = np.array([1, 1, 5])        # Initial guess.
>>> # Accept the default values for all parameters.
>>> from scipy import optimize
>>> Example_1 = optimize.fsolve(Myfunc, z0, args=params)
>>> print "Example_1 = ", Example_1

Prints the solution point value::

    [  1.79838825   1.11210691  16.66195357]

`Example 2`: To solve this problem using the new (as of Version 0.11.0) `root` function we need to (a) convert the options to an "option dictionary" using keys acceptable to the `root` method for `fsolve` (as given in the following code snippet), (b) call `root` with `method = 'hybr'` (not `'hybrd'`), and (c) take account of the fact that the returned value will be a `Result` dictionary as defined in SciPy's "optimize/optimize.py" module. We may proceed as follows (noting some changes in terminology from those used in `fsolve`):

>>> Myopts = {'col_deriv': 0,       # Default; must be `0`, not `None`.
...           'xtol': 1.49012e-8,   # Default.
...           'maxfev': 0,          # Default.
...           'band': None,         # Default.
...           'eps': 0.0,           # Default; formerly called `epsfcn`.
...           'factor': 100,        # Default.
...           'diag': None}         # Default.
>>>
>>> Example_2 = optimize.root(Myfunc, z0, args=params,
...                           method='hybr',   # Not 'hybrd'.
...                           jac=Myfprime,    # Formerly called `fprime`.
...                           options=Myopts)
>>> print "Example_2 = ", Example_2

This prints the following (in a slightly different format)::

    Example_2 =
      status: 1
      success: True
      qtf: array([  1.32699176e-07,  -8.49465473e-08,  -3.09277588e-08])
      nfev: 12
      r: array([ -7.77806716,  30.02199802,  -0.819055,
                -10.74878184,   2.00090268,   1.02706198])
      fun: array([ -2.30691910e-10,   4.58971172e-09,   5.55402391e-11])
      x: array([  1.79838825,   1.11210691,  16.66195357])
      message: 'The solution converged.'
      fjac: array([[-0.64093238,  0.75748326,  0.1241966 ],
                   [-0.62403598, -0.60841098,  0.4903215 ],
                   [-0.44697291, -0.23675978, -0.8626471 ]])
      njev: 1

As the `status` code, `success` value, and `message` indicate, the run is considered a success. The value of `fun` confirms this by showing that the equations are satisfied to at least 8 significant figures. The solution point `x` is the same as in Example_1. The value of `nfev` shows that this run used 12 function evaluations. Note that `njev`, the number of Jacobian evaluations, is 1, so the initial "approximate Jacobian" was used throughout. Thus the reported values of `fjac`, `r`, and `qtf` may not be those at the solution point, and subsequent analyses should not use them as the values at the solution point until their validity is confirmed by independent computations.

Bob and Paula Helmbold

"Many of life's failures are people who did not realize how close they were to success when they gave up."
(Thomas Edison) >________________________________ > From: "scipy-dev-request at scipy.org" >To: scipy-dev at scipy.org >Sent: Monday, March 18, 2013 8:36 AM >Subject: SciPy-Dev Digest, Vol 113, Issue 21 > >Send SciPy-Dev mailing list submissions to >??? scipy-dev at scipy.org > >To subscribe or unsubscribe via the World Wide Web, visit >??? http://mail.scipy.org/mailman/listinfo/scipy-dev >or, via email, send a message with subject or body 'help' to >??? scipy-dev-request at scipy.org > >You can reach the person managing the list at >??? scipy-dev-owner at scipy.org > >When replying, please edit your Subject line so it is more specific >than "Re: Contents of SciPy-Dev digest..." > > >Today's Topics: > >? 1. Re: Lomb-Scargle Periodogram: Press & Rybicki Algorithm >? ? ? (Christian Geier) >? 2. Re: optimize.fsolve endless loop with nan (josef.pktd at gmail.com) >? 3. Re: optimize.fsolve endless loop with nan (josef.pktd at gmail.com) >? 4. Re: optimize.fsolve endless loop with nan (Skipper Seabold) > > >---------------------------------------------------------------------- > >Message: 1 >Date: Mon, 18 Mar 2013 15:19:22 +0000 >From: Christian Geier >Subject: Re: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki >??? Algorithm >To: SciPy Developers List >Message-ID: <20130318151922.GA20372 at brutus.lostpackets.de> >Content-Type: text/plain; charset=us-ascii > >On Fri, Mar 08, 2013 at 05:18:56PM -0800, Jacob Vanderplas wrote: >> Hi, >> We have a cython version of this Lomb-Scargle algorithm in astroML [1], as >> well as a generalized version that does not depend on the sample mean being >> a good approximation of the true mean.? We've not yet implemented the FFT >> trick shown in Press & Rybicki, but it's fairly fast as-is for problems of >> reasonable size. >> >> For some examples of it in use on astronomical data, see [2-3] below (the >> examples are figures from our upcoming astro/statistics textbook). 
This >> code is BSD-licensed, so if it seems generally useful enough to include in >> scipy, it would be no problem to port it. >> >> Also, if you have implemented an FFT-based version, it would get a fair bit >> of use in the astronomy community if you were willing to contribute it to >> astroML. >> >> Thanks, >>? ? Jake >> >> [1] >> https://github.com/astroML/astroML/blob/master/astroML_addons/periodogram.pyx >> [2] http://astroml.github.com/book_figures/chapter10/fig_LS_example.html >> [3] >> http://astroml.github.com/book_figures/chapter10/fig_LS_sg_comparison.html >> >> >> On Fri, Mar 8, 2013 at 5:26 PM, Christian Geier wrote: >> >> > Hello everyone! >> > >> > Would you in general be considering to include the Lombscargle >> > Periodogram by Press & Rybicki [1] into scipy in addition to the already >> > present one? I find the included algorithm by Townend rather slow and >> > had recently some "interesting" results returned by it. >> > >> > I've recently translated the original FORTRAN code (which is actually >> > the description of the algorithm [1]) to (pure) python [2] and would like >> > to know what the legal situation is in this case: can I release this >> > translated code under the BSD license? >> > >> > In this case I would translate the code further to cython and supply >> > tests and more documentation. >> > >> > Greetings >> > >> > Christian Geier >> > >> > [1] http://adsabs.harvard.edu/full/1989ApJ...338..277P >> > [2] >> > https://github.com/geier/scipy/commit/710bf4ca514d223df39891eb20401aba2edd8cdb > >Hi, >thanks for your answer. When time allows for it (hopefully next week) >I will improve the code some more and then contact astroML. > >Regards >Christian > > >------------------------------ > >Message: 2 >Date: Mon, 18 Mar 2013 11:14:41 -0400 >From: josef.pktd at gmail.com >Subject: Re: [SciPy-Dev] optimize.fsolve endless loop with nan >To: SciPy Developers List >Message-ID: >??? 
>Content-Type: text/plain; charset=ISO-8859-1
>
>On Wed, Mar 13, 2013 at 1:24 PM, wrote:
>> On Wed, Mar 13, 2013 at 12:59 PM, wrote:
>>> On Wed, Mar 13, 2013 at 12:17 PM, wrote:
>>>> On Wed, Mar 13, 2013 at 10:51 AM, wrote:
>>>>> preliminary question, I didn't have time yet to look closely
>>>>>
>>>>>>>> scipy.__version__
>>>>> '0.9.0'
>>>>>
>>>>> I have a problem where fsolve goes into a range where the values are
>>>>> nan. After that it goes into an endless loop, as far as I can tell.
>>>>>
>>>>> Something like this has been fixed for optimize.fmin_bfgs. Was there a
>>>>> fix for this also for fsolve, since 0.9.0?
>>>>>
>>>>>
>>>>> (The weirder story: I rearranged some tests, and unfortunately also
>>>>> made some other changes, and now when I run nosetests it never
>>>>> returns. Ctrl+C kills nosetests, but leaves a python process running.
>>>>> I have no clue why the test sequence should matter.)
>>>>
>>>> I had left the starting values in a module global even after I started
>>>> to adjust them in one of the cases.
>>>>
>>>> The starting value for fsolve was in a range where the curvature is
>>>> very flat, and fsolve made huge steps into the invalid range. After
>>>> getting nans, it went AWOL.
>>>>
>>>> If I return np.inf as soon as I get a nan, then fsolve seems to stop
>>>> right away. Is there a way to induce fsolve to stay out of the nan
>>>> zone, for example by returning something other than inf?
>>>>
>>>> I don't want to find a very specific solution, because I'm throwing
>>>> lots of different cases at the same generic method.
>>>
>>> same result with python 2.7, scipy version 0.11.0b1
>>>
>>>>"C:\Programs\Python27\python.exe" fsolve_endless_nan.py
>>> scipy version 0.11.0b1
>>> args 0.3 [100] 0.05
>>> args 0.3 [ 100.] 0.05
>>> args 0.3 [ 100.] 0.05
>>> args 0.3 [ 100.00000149] 0.05
>>> args 0.3 [-132.75434239] 0.05
>>> fsolve_endless_nan.py:36: RuntimeWarning: invalid value encountered in sqrt
>>>   pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs))
>>> fsolve_endless_nan.py:39: RuntimeWarning: invalid value encountered in sqrt
>>>   pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs))
>>> args 0.3 [ nan] 0.05
>>>
>>>
>>> standalone test case (from my power branch)
>>>
>>> Don't run it in an interpreter (session) that you want to keep alive!
>>> And open the Task Manager if you are on Windows :)
>>>
>>> ------------
>>> # -*- coding: utf-8 -*-
>>> """Warning: endless loop in runaway process, requires hard kill of process
>>>
>>> Created on Wed Mar 13 12:44:15 2013
>>>
>>> Author: Josef Perktold
>>> """
>>>
>>> import numpy as np
>>> from scipy import stats, optimize
>>>
>>> import scipy
>>> print "scipy version", scipy.__version__
>>>
>>>
>>> def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'):
>>>     '''Calculate power of a ttest
>>>     '''
>>>     print 'args', effect_size, nobs, alpha
>>>     d = effect_size
>>>     if df is None:
>>>         df = nobs - 1
>>>
>>>     if alternative in ['two-sided', '2s']:
>>>         alpha_ = alpha / 2.  #no inplace changes, doesn't work
>>>     elif alternative in ['smaller', 'larger']:
>>>         alpha_ = alpha
>>>     else:
>>>         raise ValueError("alternative has to be 'two-sided', 'larger' " +
>>>                          "or 'smaller'")
>>>
>>>     pow_ = 0
>>>     if alternative in ['two-sided', '2s', 'larger']:
>>>         crit_upp = stats.t.isf(alpha_, df)
>>>         # use private methods, generic methods return nan with negative d
>>>         pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs))
>>>     if alternative in ['two-sided', '2s', 'smaller']:
>>>         crit_low = stats.t.ppf(alpha_, df)
>>>         pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs))
>>>     return pow_
>>>
>>> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1])
>>>
>>> print optimize.fsolve(func, 100, args=(0.3, 0.05))
>>> ------------
>>
>> correction to get a solution that would make sense (last two lines)
>>
>> func = lambda nobs, *args: ttest_power(args[0], nobs, args[1]) - args[2]
>>
>> print optimize.fsolve(func, 10, args=(0.76638635, 0.1, 0.8))
>> #converges to 12
>> print optimize.fsolve(func, 100, args=(0.76638635, 0.1, 0.8))      #runaway
>>
>> Josef
>> "Lost in Translation"
>
>(continuing my monologue)
>
>more problems with fsolve, this time apparently on Ubuntu
>
>Skipper uses scipy master, but python-xy daily testing uses
>python-scipy (0.10.1+dfsg1-4)
>TravisCI, which also runs Ubuntu, doesn't have any problems (using
>python-scipy amd64 0.9.0+dfsg1-1ubuntu2)
>https://travis-ci.org/statsmodels/statsmodels/jobs/5585028
>
>(just realized: I've tested on Windows so far only with scipy 0.9)
>
>from Skipper's run ( https://github.com/statsmodels/statsmodels/issues/710 ):
>
>-----------
>[~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/]
>[67]: optimize.fsolve(func, .14, full_output=True)
>[67]:
>(array([ 0.05]),
> {'fjac': array([[-1.]]),
>  'fvec': array([ 0.]),
>  'nfev': 12,
>  'qtf': array([ 0.]),
>  'r': array([-3.408293])},
> 1,
> 'The solution converged.')
>
>[~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/]
>[68]: optimize.fsolve(func, .15, full_output=True)
>[68]:
>(array([ 0.15]),
> {'fjac': array([[-1.]]),
>  'fvec': array([ 0.20178728]),
>  'nfev': 4,
>  'qtf': array([-0.20178728]),
>  'r': array([ inf])},
> 1,
> 'The solution converged.')
>
>It looks like the QR factorization is failing (r == inf) and then it's
>still reporting convergence.
>---------
>
>That it stops with "solution converged" also means it doesn't trigger my
>backup root-finding.
>
>
>I can work around this, since I have no idea how to debug it.
>However, there might be something fishy with fsolve.
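[Editor's note: the nan-guard workaround described above, plus validating the result instead of trusting the convergence message, can be sketched as follows. `nan_guard` and the toy objective are hypothetical illustrations, not scipy API or the original power function.]

```python
import numpy as np
from scipy import optimize

def nan_guard(func):
    # Hypothetical wrapper: replace nan residuals with a large finite value
    # so the solver backs away from the invalid region instead of looping.
    # Crude: the flat 1e10 plateau gives no gradient information, so it only
    # helps when the solver merely brushes the invalid region.
    def wrapped(x, *args):
        val = np.atleast_1d(np.asarray(func(x, *args), dtype=float))
        return np.where(np.isnan(val), 1e10, val)
    return wrapped

# Toy objective with a restricted domain: nan for x < 0, root at x = 4.
def f(x):
    return np.sqrt(x) - 2.0

x, info, ier, msg = optimize.fsolve(nan_guard(f), 1.0, full_output=True)

# Don't trust ier alone -- as seen above, fsolve can report convergence
# even when r contains inf.  Check the residual and the QR factor too.
converged = (ier == 1 and np.all(np.abs(info['fvec']) < 1e-8)
             and np.all(np.isfinite(info['r'])))
```

With the guard in place the solver retreats from the invalid region and `converged` only accepts a root whose residual is actually small.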
>
>I never had problems with fsolve before, and scipy.stats.distributions
>is a heavy user of it.
>
>Josef
>
>
>------------------------------
>
>Message: 3
>Date: Mon, 18 Mar 2013 11:29:17 -0400
>From: josef.pktd at gmail.com
>Subject: Re: [SciPy-Dev] optimize.fsolve endless loop with nan
>To: SciPy Developers List
>Message-ID:
>Content-Type: text/plain; charset=ISO-8859-1
>
>On Mon, Mar 18, 2013 at 11:14 AM, wrote:
>> [clip: full quote of the previous messages in this thread]
>
>I should have done the testing across versions:
>
>I'm getting the same failures on Windows with
>>>> import scipy
>>>> scipy.__version__
>'0.11.0b1'
>
>but not with scipy 0.9
>
>something has changed in the recent scipy versions that makes fsolve
>more fragile
>
>Josef
>
>> [clip]
>
>
>------------------------------
>
>Message: 4
>Date: Mon, 18 Mar 2013 11:40:49 -0400
>From: Skipper Seabold
>Subject: Re: [SciPy-Dev] optimize.fsolve endless loop with nan
>To: SciPy Developers List
>Message-ID:
>Content-Type: text/plain; charset="iso-8859-1"
>
>On Mon, Mar 18, 2013 at 11:29 AM, wrote:
>
>> [clip: full quote of the previous messages in this thread]
>
>Another data point if it helps. Using 'lm' of the new optimize.root is ok,
>but all of the other methods also fail on this - even when starting very
>close to the actual root. Most reach maximum iterations.
>
>[~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/]
>[80]: optimize.root(func, .15, method="lm")
>[80]:
>  status: 2
>     qtf: array([ 0.])
>   cov_x: array([[ 0.08608474]])
>    ipvt: array([1], dtype=int32)
> success: True
>    nfev: 16
>     fun: array([ 0.])
>       x: array([ 0.05])
> message: 'The relative error between two consecutive iterates is at most
>0.000000'
>    fjac: array([[-3.40829292]])
>
>[~/statsmodels/statsmodels-skipper/statsmodels/stats/tests/]
>[81]: optimize.root(func, .15, method="hybr")
>[81]:
>  status: 1
> success: True
>     qtf: array([-0.20178728])
>    nfev: 4
>       r: array([ inf])
>     fun: array([ 0.20178728])
>       x: array([ 0.15])
> message: 'The solution converged.'
>    fjac: array([[-1.]])
>
>> [clip]
>
>-------------- next part --------------
>An HTML attachment was scrubbed...
>URL: http://mail.scipy.org/pipermail/scipy-dev/attachments/20130318/674f08fc/attachment.html
>
>------------------------------
>
>_______________________________________________
>SciPy-Dev mailing list
>SciPy-Dev at scipy.org
>http://mail.scipy.org/mailman/listinfo/scipy-dev
>
>
>End of SciPy-Dev Digest, Vol 113, Issue 21
>******************************************

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From denis-bz-py at t-online.de  Mon Mar 18 13:17:20 2013
From: denis-bz-py at t-online.de (denis)
Date: Mon, 18 Mar 2013 17:17:20 +0000 (UTC)
Subject: [SciPy-Dev] Splines
References:
Message-ID:

Charles R Harris gmail.com> writes:
> On Wed, Mar 13, 2013 at 12:54 PM, denis t-online.de> wrote:
> Chuck,
>   is "prefilter=True will filter out high frequencies in the data"
> not correct ?
>
> No, I suspect it increases the high frequencies ;) For cubic splines
> convolving the coefficients with [1/6, 2/3, 1/6] will reproduce the sample
> values, so to get the spline coefficients you need to deconvolve what is
> essentially a low pass filter.

Chuck,
  you're right; could spline_filter1d be buggy ?

f .5 x                  [ 100 -100 100 -100 100 -100 100 -100 100 -100]
np.convolve:            [  50  -33  33  -33  33  -33  33  -33  33  -50]
ndimage.convolve:       [  50  -33  33  -33  33  -33  33  -33  33  -50]
ndimage.spline order 2: [ 200 -200 200 -200 200 -200 200 -200 200 -200]
ndimage.spline order 3: [ 300 -300 300 -300 300 -300 300 -300 300 -300]
ndimage.spline order 4: [ 480 -480 480 -480 480 -480 480 -480 480 -480]

NI_SplineFilter1D in $scipysrc/ndimage/src/ni_interpolation.c has

    case 2:
        npoles = 1;
        pole[0] = sqrt(8.0) - 3.0;
        break;
    case 3:
        npoles = 1;
        pole[0] = sqrt(3.0) - 2.0;

beyond me, spline_filter1d( ones ) is ok though.
cheers
  -- denis


From charlesr.harris at gmail.com  Mon Mar 18 14:16:23 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 18 Mar 2013 12:16:23 -0600
Subject: [SciPy-Dev] Splines
In-Reply-To:
References:
Message-ID:

On Mon, Mar 18, 2013 at 11:17 AM, denis wrote:

> [clip: denis' spline_filter1d comparison, quoted in full above]
>
> Chuck,
>   you're right; could spline_filter1d be buggy ?
>

Unlikely, probably there is a scaling factor floating about somewhere. You
need to be more precise about what you did for me to say more.

> NI_SplineFilter1D in $scipysrc/ndimage/src/ni_interpolation.c has
>
>     case 2:
>         npoles = 1;
>         pole[0] = sqrt(8.0) - 3.0;
>         break;
>     case 3:
>         npoles = 1;
>         pole[0] = sqrt(3.0) - 2.0;
>

The roots of 1/6 + 2/3*x + 1/6*x**2 are -2 +/- sqrt(3), which is where the
pole in the (factored) inverse IIR filter for cubic splines, case 3, comes
from. Offhand, I don't see where spline_filter1d handles the boundary
conditions, which I assume are only the mirrored case. The usual way to
handle that, sometimes obscured in the literature, is to treat the boundary
conditions as a perturbation and use the Sherman-Morrison formula to deal
with them.
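[Editor's note: both points above can be checked numerically. Convolving the cubic-spline coefficients with [1/6, 2/3, 1/6] does reproduce the samples, which means denis' order-3 output of +/-300 is the correct deconvolution rather than a bug; and sqrt(3) - 2 is indeed the stable root of the kernel's characteristic polynomial. A sketch, comparing interior samples only to sidestep boundary handling:]

```python
import numpy as np
from scipy import ndimage

# denis' alternating test signal.
x = np.array([100., -100.] * 5)

# Cubic-spline prefilter (denis' "ndimage.spline order 3" case);
# the coefficients come out as roughly [300, -300, ...], as he reported.
c = ndimage.spline_filter1d(x, order=3)

# Convolving the coefficients with the B-spline taps gives the samples
# back, away from the (mirrored) boundaries.
recon = np.convolve(c, [1/6., 2/3., 1/6.], mode='same')
assert np.allclose(recon[1:-1], x[1:-1])

# The pole quoted from ni_interpolation.c (case 3) is the stable root of
# the kernel's characteristic polynomial 1/6 + 2/3*z + 1/6*z**2.
roots = np.real(np.roots([1/6., 2/3., 1/6.]))
assert np.isclose(roots.max(), np.sqrt(3.) - 2.)
```

So the prefilter amplifies the high frequencies exactly as Chuck describes: it is the inverse of the [1/6, 2/3, 1/6] low-pass filter.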
That is somewhat clearer if one sets up the circulant matrix for periodic
boundary conditions and uses its exact factorization into circulant
matrices as a starting point.

This brings up the whole question of boundary conditions for uniform
splines. The version in signal processing seems to be mirrored (like the
DCT-II), and I assume the same for spline_filter1d. The reference for
spline_filter1d looks to be Unser's early paper. I think we need at least
periodic conditions for closed loops, perhaps reflected (DCT-I like), and
among the remaining options would be not-a-knot and endpoint derivatives.
Some of these get a bit tricky for higher order splines, so perhaps the
spline_filter1d choice of degree 5 as the maximum is reasonable. Note that
high degree uniform splines go over to sinc interpolation methods and show
the Gibbs phenomenon.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From denis-bz-py at t-online.de  Tue Mar 19 08:28:57 2013
From: denis-bz-py at t-online.de (denis)
Date: Tue, 19 Mar 2013 12:28:57 +0000 (UTC)
Subject: [SciPy-Dev] Splines
References:
Message-ID:

Charles R Harris gmail.com> writes:

> > No, I suspect it increases the high frequencies ;)
> Chuck,
>   you're right; could spline_filter1d be buggy ?
>
> Unlikely, probably there is a scaling factor floating about somewhere. You
> need to be more precise about what you did for me to say more.

test-filter1d.py at https://gist.github.com/denis-bz/5195594
runs a couple of waveforms through spline_filter1d, np.convolve, and
ndimage.convolve1d -- the last two match and look reasonable.

cheers
  -- denis

From m.hofsaess at gmail.com  Thu Mar 21 10:42:24 2013
From: m.hofsaess at gmail.com (Martin Hofsäß)
Date: Thu, 21 Mar 2013 15:42:24 +0100
Subject: [SciPy-Dev] scipy build error
Message-ID:

Hi all,

I have a fresh system with Linux Mint 14 (64-bit) and have successfully
installed the Intel C and Fortran compilers.
Setting the path to the mkl destination in the site.cfg was no problem and building numpy worked without error. I build numpy and scipy with "python setup.py config --compiler=intelem --fcompiler=intelem build_clib --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem --fcompiler=intelem install --user" But trying to build scipy-0.11 and scipy-0.12b I get the following error. Thanks for your help. Best regards Martin Running from scipy source directory. blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl_rt', 'pthread'] library_dirs = ['/opt/intel/mkl/lib/intel64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/include/'] FOUND: libraries = ['mkl_rt', 'pthread'] library_dirs = ['/opt/intel/mkl/lib/intel64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/include/'] lapack_opt_info: lapack_mkl_info: mkl_info: FOUND: libraries = ['mkl_rt', 'pthread'] library_dirs = ['/opt/intel/mkl/lib/intel64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/include/'] FOUND: libraries = ['mkl_rt', 'pthread'] library_dirs = ['/opt/intel/mkl/lib/intel64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/include/'] FOUND: libraries = ['mkl_rt', 'pthread'] library_dirs = ['/opt/intel/mkl/lib/intel64'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/include/'] umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/usr/local/lib'] swig_opts = ['-I/usr/local/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/usr/local/include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/usr/local/lib'] swig_opts = ['-I/usr/local/include', '-I/usr/local/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/usr/local/include'] running config running build_clib running build_src build_src building py_modules sources building library "dfftpack" sources building library "fftpack" sources 
building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "dop" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack_scipy" sources building library "qhull" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.integrate._dop" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.interpolate.interpnd" sources building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. 
adding 'build/src.linux-x86_64-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.matlab.streams" sources building extension "scipy.io.matlab.mio_utils" sources building extension "scipy.io.matlab.mio5_utils" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. adding 'build/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.linux-x86_64-2.7/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.linux-x86_64-2.7/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. adding 'build/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/linalg/fblas-f2pywrappers.f' to sources. 
building extension "scipy.linalg.cblas" sources adding 'build/src.linux-x86_64-2.7/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.linalg.flapack" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. adding 'build/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.linux-x86_64-2.7/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. 
building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spectral" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.sparse.linalg.dsolve._superlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources adding 'scipy/sparse/linalg/dsolve/umfpack/umfpack.i' to sources. building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. adding 'build/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. 
building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.sparse.sparsetools._csgraph" sources building extension "scipy.sparse.csgraph._shortest_path" sources building extension "scipy.sparse.csgraph._traversal" sources building extension "scipy.sparse.csgraph._min_spanning_tree" sources building extension "scipy.sparse.csgraph._tools" sources building extension "scipy.spatial.qhull" sources building extension "scipy.spatial.ckdtree" sources building extension "scipy.spatial._distance_wrap" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.special.orthogonal_eval" sources building extension "scipy.special.lambertw" sources building extension "scipy.special._logit" sources building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.stats.vonmises_cython" sources building extension "scipy.stats._rank" sources building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.linux-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.linux-x86_64-2.7' to include_dirs. adding 'build/src.linux-x86_64-2.7/scipy/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources build_src: building npy-pkg config files Found executable /opt/intel/composer_xe_2011_sp1.10.319/bin/intel64/icc Could not locate executable ecc customize IntelEM64TCCompiler customize IntelEM64TCCompiler using build_clib customize IntelEM64TFCompiler Found executable /opt/intel/composer_xe_2011_sp1.10.319/bin/intel64/ifort customize IntelEM64TFCompiler using build_clib running build_ext customize IntelEM64TCCompiler customize IntelEM64TCCompiler using build_ext extending extension 'scipy.sparse.linalg.dsolve._superlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize IntelEM64TCCompiler customize IntelEM64TCCompiler using build_ext customize IntelEM64TFCompiler customize IntelEM64TFCompiler using build_ext building 'scipy.interpolate._interpolate' extension compiling C++ sources C compiler: icc -m64 -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost compile options: '-Iscipy/interpolate/src -I/home/likewise-open/hofsaess/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c' icc: scipy/interpolate/src/_interpolate.cpp /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__ATOMIC_ACQ_REL" is undefined { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__atomic_fetch_add" is undefined { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__ATOMIC_ACQ_REL" is undefined { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__atomic_fetch_add" is undefined { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /home/likewise-open/hofsaess/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_deprecated_api.h(11): warning #1224: #warning directive: "Using deprecated NumPy API, disable it by #defining 
NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" ^ compilation aborted for scipy/interpolate/src/_interpolate.cpp (code 2) /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__ATOMIC_ACQ_REL" is undefined { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__atomic_fetch_add" is undefined { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__ATOMIC_ACQ_REL" is undefined { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__atomic_fetch_add" is undefined { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } ^ /home/likewise-open/hofsaess/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_deprecated_api.h(11): warning #1224: #warning directive: "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" ^ compilation aborted for scipy/interpolate/src/_interpolate.cpp (code 2) error: Command "icc -m64 -O3 -g -fPIC -fp-model strict -fomit-frame-pointer -openmp -xhost -Iscipy/interpolate/src -I/home/likewise-open/hofsaess/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c scipy/interpolate/src/_interpolate.cpp -o build/temp.linux-x86_64-2.7/scipy/interpolate/src/_interpolate.o" failed with exit status 2 From pav at iki.fi Thu Mar 21 13:49:18 2013 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 21 Mar 2013 19:49:18 +0200 Subject: [SciPy-Dev] scipy build error In-Reply-To: References: Message-ID: 21.03.2013 16:42, Martin Hofs?? kirjoitti: > I have a new fresh system with linux mint 14 (64bit) and had install > the intel compiler c and fortran successful. 
>
> Setting the path to the MKL destination in site.cfg was no problem,
> and building numpy worked without error.

[clip]
> compilation aborted for scipy/interpolate/src/_interpolate.cpp (code 2)
> /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier
> "__ATOMIC_ACQ_REL" is undefined
>   { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }

Looks like some problem in the installation of the Intel C++ compiler.
Those headers come from libstdc++, which I doubt is used by icc.

Try checking that you can compile simple C++ programs first.

--
Pauli Virtanen

From m.hofsaess at gmail.com Thu Mar 21 15:02:19 2013
From: m.hofsaess at gmail.com (Martin Hofsäß)
Date: Thu, 21 Mar 2013 20:02:19 +0100
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

Hi,

I have tried the code

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello World!" << endl;
    return 0;
}

with icpc test.cpp, and I get the following error:

/usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__ATOMIC_ACQ_REL" is undefined
    { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
      ^
/usr/include/c++/4.7/ext/atomicity.h(48): error: identifier "__atomic_fetch_add" is undefined
    { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
      ^
/usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__ATOMIC_ACQ_REL" is undefined
    { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
      ^
/usr/include/c++/4.7/ext/atomicity.h(52): error: identifier "__atomic_fetch_add" is undefined
    { __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
      ^
compilation aborted for test.cpp (code 2)

So the compilation failed. What can I do?

2013/3/21 Pauli Virtanen :
> 21.03.2013 16:42, Martin Hofsäß kirjoitti:
>> I have a new fresh system with Linux Mint 14 (64-bit) and have
>> installed the Intel C and Fortran compilers successfully.
>>
>> Setting the path to the MKL destination in site.cfg was no problem,
>> and building numpy worked without error.
> [clip]
>> compilation aborted for scipy/interpolate/src/_interpolate.cpp (code 2)
>> /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier
>> "__ATOMIC_ACQ_REL" is undefined
>> { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
>
> Looks like some problem in the installation of the Intel C++ compiler.
> Those headers come from libstdc++, which I doubt is used by icc.
>
> Try checking that you can compile simple C++ programs first.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From beamesleach at gmail.com Thu Mar 21 15:08:48 2013
From: beamesleach at gmail.com (Alex Leach)
Date: Thu, 21 Mar 2013 19:08:48 -0000
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

On Thu, 21 Mar 2013 19:02:19 -0000, Martin Hofsäß wrote:
>
> with icpc test.cpp, and I get the following error:
>
> /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier
> "__ATOMIC_ACQ_REL" is undefined
> { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
> ^
>

What version of the Intel compiler do you have installed? Have you tried
compiling with the "--std=gnu++0x" compile flag?

Cheers,
Alex

From m.hofsaess at gmail.com Thu Mar 21 15:17:29 2013
From: m.hofsaess at gmail.com (Martin Hofsäß)
Date: Thu, 21 Mar 2013 20:17:29 +0100
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

icpc --std=gnu++0x test.cpp doesn't work.

2013/3/21 Alex Leach :
> On Thu, 21 Mar 2013 19:02:19 -0000, Martin Hofsäß wrote:
>
>> with icpc test.cpp, and I get the following error:
>>
>> /usr/include/c++/4.7/ext/atomicity.h(48): error: identifier
>> "__ATOMIC_ACQ_REL" is undefined
>> { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }
>> ^
>>
>
> What version of the Intel compiler do you have installed? Have you tried
> compiling with the "--std=gnu++0x" compile flag?
>
> Cheers,
> Alex
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From beamesleach at gmail.com Thu Mar 21 15:20:17 2013
From: beamesleach at gmail.com (Alex Leach)
Date: Thu, 21 Mar 2013 19:20:17 -0000
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

Sorry, just one dash before 'std'. See 'man icc' for details. Also worth
trying are '-std=c++0x' and '-std=c++11'. The latter isn't documented, but
does seem to work nonetheless.

On Thu, 21 Mar 2013 19:17:29 -0000, Martin Hofsäß wrote:
> icpc --std=gnu++0x test.cpp doesn't work.
>
> 2013/3/21 Alex Leach :
>> On Thu, 21 Mar 2013 19:02:19 -0000, Martin Hofsäß wrote:
>>
>> What version of the Intel compiler do you have installed? Have you tried
>> compiling with the "--std=gnu++0x" compile flag?
>>
>> Cheers,
>> Alex
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From ralf.gommers at gmail.com Thu Mar 21 15:27:45 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 21 Mar 2013 20:27:45 +0100
Subject: [SciPy-Dev] ANN: SciPy 0.12.0 beta 1 release
In-Reply-To: References: Message-ID:

On Sat, Feb 16, 2013 at 10:34 PM, Ralf Gommers wrote:
> Hi,
>
> I am pleased to announce the availability of the first beta release of
> SciPy 0.12.0. This is shaping up to be another solid release, with some
> cool new features (see highlights below) and a large number of bug fixes
> and maintenance work under the hood. The number of contributors also keeps
> rising steadily; we're at 74 so far for this release.
>

Hi all, just an update on the status of 0.12.0.
There have not been a lot of issues reported for the beta, so the TODO list
isn't all that long: there's a list of 7 tickets at
http://projects.scipy.org/scipy/report/3, most of which already have PRs to
close them.

I have had trouble finding time to spend on this release, but aim to get
RC1 out within a week or so.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From m.hofsaess at gmail.com Thu Mar 21 15:28:53 2013
From: m.hofsaess at gmail.com (Martin Hofsäß)
Date: Thu, 21 Mar 2013 20:28:53 +0100
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

That doesn't work.

I think that my icpc version is incompatible with gcc 4.7.

I think the first solution is to go back to gcc 4.6 or to update the Intel
compiler.

2013/3/21 Alex Leach :
> Sorry, just one dash before 'std'. See 'man icc' for details. Also worth
> trying are '-std=c++0x' and '-std=c++11'. The latter isn't documented, but
> does seem to work nonetheless.
>
> On Thu, 21 Mar 2013 19:17:29 -0000, Martin Hofsäß wrote:
>
>> icpc --std=gnu++0x test.cpp doesn't work.
>>
>> 2013/3/21 Alex Leach :
>>> On Thu, 21 Mar 2013 19:02:19 -0000, Martin Hofsäß wrote:
>>>
>>> What version of the Intel compiler do you have installed? Have you tried
>>> compiling with the "--std=gnu++0x" compile flag?
>>>
>>> Cheers,
>>> Alex
>>> _______________________________________________
>>> SciPy-Dev mailing list
>>> SciPy-Dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-dev
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From beamesleach at gmail.com Thu Mar 21 15:35:14 2013
From: beamesleach at gmail.com (Alex Leach)
Date: Thu, 21 Mar 2013 19:35:14 -0000
Subject: [SciPy-Dev] scipy build error
In-Reply-To: References: Message-ID:

On Thu, 21 Mar 2013 19:28:53 -0000, Martin Hofsäß wrote:
> That doesn't work.
>
> I think that my icpc version is incompatible with gcc 4.7.
>
> I think the first solution is to go back to gcc 4.6 or to update the Intel
> compiler.
>

Works for me, with GCC 4.7.2:

$ icpc test.cpp
$ ./a.out
Hello World!
$ icpc --version
icpc (ICC) 13.1.0 20130121

I remember getting this problem last year; earlier versions of icc/icpc
weren't compatible with the __atomic builtins, but they are now. If you
can't get hold of a newer version of icc, then yes, you'll need to get hold
of the GCC 4.6 headers.

From ralf.gommers at gmail.com Thu Mar 21 17:20:31 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 21 Mar 2013 22:20:31 +0100
Subject: [SciPy-Dev] NumPy/SciPy participation in GSoC 2013
Message-ID:

Hi all,

It is the time of the year for Google Summer of Code applications. If we
want to participate with Numpy and/or Scipy, we need two things: enough
mentors and ideas for projects. If we get those, we'll apply under the PSF
umbrella. They've outlined the timeline they're working by and guidelines
at http://pyfound.blogspot.nl/2013/03/get-ready-for-google-summer-of-code.html.
We should be able to come up with some interesting project ideas, I'd
think; let's put those at
http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas, preferably with
enough detail to be understandable for people new to the projects, and with
a proposed mentor.

We need at least 3 people willing to mentor a student. Ideally we'd have
enough mentors this week, so we can apply to the PSF on time. If you're
willing to be a mentor, please send me the following: name, email address,
phone number, and what you're interested in mentoring. If you have time
constraints and have doubts about being able to be a primary mentor, being
a backup mentor would also be helpful.

Cheers,
Ralf

P.S. As you can probably tell from the above, I'm happy to coordinate the
GSoC applications for Numpy and Scipy.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Thu Mar 21 17:46:39 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 21 Mar 2013 22:46:39 +0100
Subject: [SciPy-Dev] scipy.org spam
Message-ID:

Hi,

There's a "remove spam" action on the wiki, which would be useful right now
for the scipy.org front page. I don't have (or have lost) the rights for
that action, though. Can someone with the rights to do that press that
button?

Thanks,
Ralf

if only we had that static site.....
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Fri Mar 22 07:53:47 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 22 Mar 2013 11:53:47 +0000
Subject: [SciPy-Dev] scipy.org spam
In-Reply-To: References: Message-ID:

On Thu, Mar 21, 2013 at 9:46 PM, Ralf Gommers wrote:
> Hi,
>
> There's a "remove spam" action on the wiki, which would be useful right now
> for the scipy.org front page. I don't have (or have lost) the rights for
> that action, though. Can someone with the rights to do that press that
> button?
I'm trying to, but it's rejecting your last valid revision: Sorry, can not save page because "[[ImageLink(scipydownloadlogosmb.png,Download)]] || [[ImageLink(scipygetstartedsm2b.png,Getting_Started)]] || [[ImageLink(scipydoclogosm.png,http://docs.scipy.org)]] || [[ImageLink(scipybuglogosm2e.png,BugReport)]] || [[ImageLink(feed-icon-100.png,http://planet.scipy.org)]]" is not allowed in this wiki. There must be something in the BadContent or LocalBadContent filter that is triggering on one of these links, but I don't know what it is. -- Robert Kern From robert.kern at gmail.com Fri Mar 22 07:56:37 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Mar 2013 11:56:37 +0000 Subject: [SciPy-Dev] scipy.org spam In-Reply-To: References: Message-ID: On Fri, Mar 22, 2013 at 11:53 AM, Robert Kern wrote: > On Thu, Mar 21, 2013 at 9:46 PM, Ralf Gommers wrote: >> Hi, >> >> There's a "remove spam" action on the wiki, which would be useful right now >> for the scipy.org front page. I don't have or lost the rights for that >> action though. Can someone with the rights to do that press that button? > > I'm trying to, but it's rejecting your last valid revision: > > Sorry, can not save page because > "[[ImageLink(scipydownloadlogosmb.png,Download)]] || > [[ImageLink(scipygetstartedsm2b.png,Getting_Started)]] || > [[ImageLink(scipydoclogosm.png,http://docs.scipy.org)]] || > [[ImageLink(scipybuglogosm2e.png,BugReport)]] || > [[ImageLink(feed-icon-100.png,http://planet.scipy.org)]]" is not > allowed in this wiki. > > > There must be something in the BadContent or LocalBadContent filter > that is triggering on one of these links, but I don't know what it is. Easy enough to circumvent at the shell, though. Fixed. -- Robert Kern From suryak at ieee.org Fri Mar 22 10:04:08 2013 From: suryak at ieee.org (Surya Kasturi) Date: Fri, 22 Mar 2013 19:34:08 +0530 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning Message-ID: Hi. 
guys, the design part of the website is finally done. You can view the
changes and run it locally on your system by downloading SPC from
https://github.com/ksurya/SciPyCentral

However, I need to update the software version in the code before sending a
pull request. Can anyone suggest what it should be?

The current version is 0.20.

New updates:

1. new design from scratch
2. fixed 1 bug
3. identified it (status: low)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Sat Mar 23 08:12:33 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 23 Mar 2013 13:12:33 +0100
Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning
In-Reply-To: References: Message-ID:

On Fri, Mar 22, 2013 at 3:04 PM, Surya Kasturi wrote:
> Hi guys, the design part of the website is finally done. You can view the
> changes and run it locally on your system by downloading SPC from
> https://github.com/ksurya/SciPyCentral
>
> However, I need to update the software version in the code before sending
> a pull request. Can anyone suggest what it should be?
>
> The current version is 0.20.
>

Hi Surya, I think the version numbering shouldn't be critical, because it's
for a website and not for a library that others will build on. For the
deployed version, using the last commit ID would be enough. If there's a
version number that says 0.20 now, you could change it to 0.30.

> New updates:
>
> 1. new design from scratch
> 2. fixed 1 bug
> 3. identified it (status: low)
>

Sounds good. Looking at your design-1.0 branch, I noticed that there aren't
any meaningful commit messages. It would be good to edit those, then send a
PR.

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Sat Mar 23 08:30:28 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 23 Mar 2013 13:30:28 +0100 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize In-Reply-To: References: Message-ID: On Sat, Mar 16, 2013 at 10:55 AM, Daniel Asenjo wrote: > I have an implementation of the FIRE minimization algorithm that I would > like add to scipy.optimize. > > The algorithm is described here: > > http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 > > FIRE is relatively fast although not as efficient as L-BFGS but it is much > more stable as shown in our recently submitted paper: > > http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf > > I think that FIRE would be a good addition to the existing algorithms in > scipy.optimize as it is becoming popular within the community. > > What do you think? > Guess I could figure this out from the papers, but it's easier to ask: is FIRE in any way related to Firefly that Andrea Gavana recently proposed for inclusion in scipy.optimize: http://article.gmane.org/gmane.comp.python.scientific.user/33701/match=firefly? Andrea has a quite extensive benchmark of global optimizers, if you could work with him to include your algorithm then that would help a lot in determining the added value for scipy. Certainly we need a few more good global optimizers, but we need to be confident that they're an improvement on what's already available. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Sat Mar 23 09:44:21 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 23 Mar 2013 19:14:21 +0530 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 5:42 PM, Ralf Gommers wrote: > > > > On Fri, Mar 22, 2013 at 3:04 PM, Surya Kasturi wrote: > >> Hi. guys, the design part of the website is finally done. 
you can view >> the changes, run locally in your systems by downloading spc at >> https://github.com/ksurya/SciPyCentral >> >> However, i need to update the software version in code before sending >> pull request. >> >> can anyone suggest on it? >> >> current version is 0.20 >> > > Hi Surya, I think the version numbering shouldn't be critical, because > it's for a website and not for a library that others will build on. For the > deployed version using the last commit ID would be enough. If there's a > version number that says 0.20 now, you could change it to 0.30. > Okay! > > >> >> New updates : >> >> 1. new design from scratch >> 2. fixed 1 bug >> 3. identified another (status: low) --typo** >> > > Sounds good. Looking at your design-1.0 branch, I noticed that there > aren't any meaningful commit messages. It would be good to edit those, then > send a PR. > Actually, most of the changes I made are from scratch even though files are same. That's the reason why for most of commits, I just dropped "design update". Will try to look into it and be specific.. Would you suggest any better for writing commit messages in this situation. > > > Cheers, > Ralf > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstevenson131 at gmail.com Sat Mar 23 11:38:44 2013 From: jstevenson131 at gmail.com (Jacob Stevenson) Date: Sat, 23 Mar 2013 15:38:44 +0000 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize In-Reply-To: References: Message-ID: <01F5F7C4-A963-482D-B9FB-DE0625B28FC0@gmail.com> I can answer this. FIRE is not related to Firefly. Actually FIRE is a local optimiser and Firefly is a global optimiser. 
Jake On 23 Mar 2013, at 12:30, Ralf Gommers wrote: > > > > On Sat, Mar 16, 2013 at 10:55 AM, Daniel Asenjo wrote: > I have an implementation of the FIRE minimization algorithm that I would like add to scipy.optimize. > > The algorithm is described here: > > http://link.aps.org/doi/10.1103/PhysRevLett.97.170201 > > FIRE is relatively fast although not as efficient as L-BFGS but it is much more stable as shown in our recently submitted paper: > > http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf > > I think that FIRE would be a good addition to the existing algorithms in scipy.optimize as it is becoming popular within the community. > > What do you think? > > Guess I could figure this out from the papers, but it's easier to ask: is FIRE in any way related to Firefly that Andrea Gavana recently proposed for inclusion in scipy.optimize: http://article.gmane.org/gmane.comp.python.scientific.user/33701/match=firefly ? > > Andrea has a quite extensive benchmark of global optimizers, if you could work with him to include your algorithm then that would help a lot in determining the added value for scipy. Certainly we need a few more good global optimizers, but we need to be confident that they're an improvement on what's already available. > > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Sat Mar 23 11:39:01 2013 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 23 Mar 2013 16:39:01 +0100 Subject: [SciPy-Dev] adding FIRE algorithm to scipy.optimize In-Reply-To: References: Message-ID: Hi Ralf & All, On 23 March 2013 13:30, Ralf Gommers wrote: > > > > On Sat, Mar 16, 2013 at 10:55 AM, Daniel Asenjo > wrote: >> >> I have an implementation of the FIRE minimization algorithm that I would >> like add to scipy.optimize. 
>>
>> The algorithm is described here:
>>
>> http://link.aps.org/doi/10.1103/PhysRevLett.97.170201
>>
>> FIRE is relatively fast, although not as efficient as L-BFGS, but it is
>> much more stable, as shown in our recently submitted paper:
>>
>> http://people.ds.cam.ac.uk/daa32/compareminmeth.pdf
>>
>> I think that FIRE would be a good addition to the existing algorithms in
>> scipy.optimize as it is becoming popular within the community.
>>
>> What do you think?
>
> Guess I could figure this out from the papers, but it's easier to ask: is
> FIRE in any way related to Firefly, which Andrea Gavana recently proposed
> for inclusion in scipy.optimize:
> http://article.gmane.org/gmane.comp.python.scientific.user/33701/match=firefly
> ?

I don't have access to the second paper (or at least I couldn't find it
freely downloadable on the web), but it seems to me that FIRE stands for
"Fast Inertial Relaxation Engine", so I guess it's a different algorithm
from the Firefly one.

> Andrea has a quite extensive benchmark of global optimizers; if you could
> work with him to include your algorithm then that would help a lot in
> determining the added value for scipy. Certainly we need a few more good
> global optimizers, but we need to be confident that they're an improvement
> on what's already available.

The main page describing the benchmark, the rules, the algorithms and the
motivation is here:

http://infinity77.net/global_optimization/index.html

Algorithm comparisons:

http://infinity77.net/global_optimization/multidimensional.html
http://infinity77.net/global_optimization/univariate.html

Test functions:

http://infinity77.net/global_optimization/test_functions.html

I would be more than happy to test the FIRE algorithm against the benchmark
test suite, to see how it compares with the algorithms I have tried up to
now. At the moment, my implementation of AMPGO is superior to all the other
algorithms I have tried.
However, to stress it again, it is an algorithm designed for low-dimensional optimization problems (i.e., 1-10 variables). Daniel, if you have an implementation of the FIRE algorithm somewhere I can access it, please do send me a link and I will re-run the benchmark including it. It doesn't need to be perfectly documented or up to Scipy standards (yet), but I am very curious to give it a try. Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://www.infinity77.net # ------------------------------------------------------------- # def ask_mailing_list_support(email): if mention_platform_and_version() and include_sample_app(): send_message(email) else: install_malware() erase_hard_drives() # ------------------------------------------------------------- # From ralf.gommers at gmail.com Sat Mar 23 12:18:39 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 23 Mar 2013 17:18:39 +0100 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 2:44 PM, Surya Kasturi wrote: > > > > On Sat, Mar 23, 2013 at 5:42 PM, Ralf Gommers wrote: > >> >> >> >> On Fri, Mar 22, 2013 at 3:04 PM, Surya Kasturi wrote: >> >>> Hi. guys, the design part of the website is finally done. you can view >>> the changes, run locally in your systems by downloading spc at >>> https://github.com/ksurya/SciPyCentral >>> >>> However, i need to update the software version in code before sending >>> pull request. >>> >>> can anyone suggest on it? >>> >>> current version is 0.20 >>> >> >> Hi Surya, I think the version numbering shouldn't be critical, because >> it's for a website and not for a library that others will build on. For the >> deployed version using the last commit ID would be enough. If there's a >> version number that says 0.20 now, you could change it to 0.30. >> > > Okay! > > >> >> >>> >>> New updates : >>> >>> 1. new design from scratch >>> 2. fixed 1 bug >>> 3. 
identified another (status: low) --typo** >>> >> >> Sounds good. Looking at your design-1.0 branch, I noticed that there >> aren't any meaningful commit messages. It would be good to edit those, then >> send a PR. >> > > Actually, most of the changes I made are from scratch even though files > are same. That's the reason why for most of commits, I just dropped "design > update". Will try to look into it and be specific.. Would you suggest any > better for writing commit messages in this situation. > This section of our devguide contains some advice on that: http://docs.scipy.org/doc/numpy/dev/gitwash/development_workflow.html#writing-the-commit-message Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Sun Mar 24 09:24:07 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sun, 24 Mar 2013 18:54:07 +0530 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: On Sat, Mar 23, 2013 at 9:48 PM, Ralf Gommers wrote: > > > > On Sat, Mar 23, 2013 at 2:44 PM, Surya Kasturi wrote: > >> >> >> >> On Sat, Mar 23, 2013 at 5:42 PM, Ralf Gommers wrote: >> >>> >>> >>> >>> On Fri, Mar 22, 2013 at 3:04 PM, Surya Kasturi wrote: >>> >>>> Hi. guys, the design part of the website is finally done. you can view >>>> the changes, run locally in your systems by downloading spc at >>>> https://github.com/ksurya/SciPyCentral >>>> >>>> However, i need to update the software version in code before sending >>>> pull request. >>>> >>>> can anyone suggest on it? >>>> >>>> current version is 0.20 >>>> >>> >>> Hi Surya, I think the version numbering shouldn't be critical, because >>> it's for a website and not for a library that others will build on. For the >>> deployed version using the last commit ID would be enough. If there's a >>> version number that says 0.20 now, you could change it to 0.30. >>> >> >> Okay! >> >> >>> >>> >>>> >>>> New updates : >>>> >>>> 1. 
new design from scratch >>>> 2. fixed 1 bug >>>> 3. identified another (status: low) --typo** >>>> >>> >>> Sounds good. Looking at your design-1.0 branch, I noticed that there >>> aren't any meaningful commit messages. It would be good to edit those, then >>> send a PR. >>> >> >> Actually, most of the changes I made are from scratch even though files >> are same. That's the reason why for most of commits, I just dropped "design >> update". Will try to look into it and be specific.. Would you suggest any >> better for writing commit messages in this situation. >> > > This section of our devguide contains some advice on that: > http://docs.scipy.org/doc/numpy/dev/gitwash/development_workflow.html#writing-the-commit-message > > I didn't knew these standards earlier. Oops. anyways, can you please tell me how to "edit" commit messages in Git? > Cheers, > Ralf > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Mar 24 10:16:03 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 24 Mar 2013 16:16:03 +0200 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: Hi Surya & Ralf, 24.03.2013 15:24, Surya Kasturi kirjoitti: [clip] > I didn't knew these standards earlier. Oops. anyways, can you please > tell me how to "edit" commit messages in Git? I don't think we should spend time on fixing the commit messages before merging. I think we can merge your design-1.0 branch as-is, to keep the ball rolling (the design looks good to me and seems to work). For the future, you can edit your previous commits by using Git's interactive rebasing: https://help.github.com/articles/interactive-rebase This is also useful for collapsing several commits into one (e.g. 
you make a mistake and fix it in a new commit, and want to include the fix in the first commit) and reordering them. I use this basically all the time. Before using rebase, it's a good idea to know the `git reflog show BRANCHNAME' command, that together with `git reset --hard` allows you to undo whatever git operations you did. *** Some further suggestions on the design: - Add a top-bar for navigating between scipy.org, numpy.scipy.org, docs.scipy.org and scipy-central.org - Maybe there's room for the scipy-central logo? -- Pauli Virtanen From suryak at ieee.org Sun Mar 24 12:34:06 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sun, 24 Mar 2013 22:04:06 +0530 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: On Sun, Mar 24, 2013 at 7:46 PM, Pauli Virtanen wrote: > Hi Surya & Ralf, > > 24.03.2013 15:24, Surya Kasturi kirjoitti: > [clip] > > I didn't knew these standards earlier. Oops. anyways, can you please > > tell me how to "edit" commit messages in Git? > > I don't think we should spend time on fixing the commit messages before > merging. I think we can merge your design-1.0 branch as-is, to keep the > ball rolling (the design looks good to me and seems to work). > > For the future, you can edit your previous commits by using Git's > interactive rebasing: > > https://help.github.com/articles/interactive-rebase > > This is also useful for collapsing several commits into one (e.g. you > make a mistake and fix it in a new commit, and want to include the fix > in the first commit) and reordering them. I use this basically all the > time. > > Before using rebase, it's a good idea to know the `git reflog show > BRANCHNAME' command, that together with `git reset --hard` allows you to > undo whatever git operations you did. 
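[Editor's note: Pauli's rebase advice can be tried safely in a throwaway repository. A minimal sketch — repository, identity, and commit messages are all hypothetical; `git commit --amend` rewords the most recent commit, while `git rebase -i HEAD~N` opens a todo list where changing `pick` to `reword` lets you edit older messages in turn:]

```shell
# Hypothetical throwaway repo for practicing message editing.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email surya@example.com   # placeholder identity
git config user.name "Surya"
echo body > page.html
git add page.html
git commit -qm "design update"
# Reword the most recent commit non-interactively:
git commit --amend -qm "DESIGN: rebuild base template and navigation"
git log --format=%s   # -> DESIGN: rebuild base template and navigation
```

[If anything goes wrong during a rebase, `git reflog show BRANCHNAME` plus `git reset --hard` will get the branch back, as noted above.]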
> > *** > > Some further suggestions on the design: > > - Add a top-bar for navigating between scipy.org, numpy.scipy.org, > docs.scipy.org and scipy-central.org > > Well, I created a "quick links" on right side bar.. where these link lie. This should work. I am looking for a cleaner design. Too many links of top nav bar will over populate.. > - Maybe there's room for the scipy-central logo? > > I am even thinking of it.. We will do something about it in next PR. what do you say.. However we still got enough time to give a thought until we actually push changes to server. > -- > Pauli Virtanen > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > I updated commit messages. Sending PR NOW. -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Sun Mar 24 13:19:04 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sun, 24 Mar 2013 22:49:04 +0530 Subject: [SciPy-Dev] scipycentral design complete -- need help in versioning In-Reply-To: References: Message-ID: On Sun, Mar 24, 2013 at 10:04 PM, Surya Kasturi wrote: > > > > On Sun, Mar 24, 2013 at 7:46 PM, Pauli Virtanen wrote: > >> Hi Surya & Ralf, >> >> 24.03.2013 15:24, Surya Kasturi kirjoitti: >> [clip] >> > I didn't knew these standards earlier. Oops. anyways, can you please >> > tell me how to "edit" commit messages in Git? >> >> I don't think we should spend time on fixing the commit messages before >> merging. I think we can merge your design-1.0 branch as-is, to keep the >> ball rolling (the design looks good to me and seems to work). >> >> For the future, you can edit your previous commits by using Git's >> interactive rebasing: >> >> https://help.github.com/articles/interactive-rebase >> >> This is also useful for collapsing several commits into one (e.g. 
you >> make a mistake and fix it in a new commit, and want to include the fix >> in the first commit) and reordering them. I use this basically all the >> time. >> >> Before using rebase, it's a good idea to know the `git reflog show >> BRANCHNAME' command, that together with `git reset --hard` allows you to >> undo whatever git operations you did. >> >> *** >> >> Some further suggestions on the design: >> >> - Add a top-bar for navigating between scipy.org, numpy.scipy.org, >> docs.scipy.org and scipy-central.org >> >> Well, I created a "quick links" on right side bar.. where these link lie. > This should work. I am looking for a cleaner design. Too many links of top > nav bar will over populate.. > > >> - Maybe there's room for the scipy-central logo? >> >> I am even thinking of it.. We will do something about it in next PR. what > do you say.. However we still got enough time to give a thought until we > actually push changes to server. > >> -- >> Pauli Virtanen >> >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > I updated commit messages. Sending PR NOW. > I sent PR. (I think we need to preserve the existing code in a different branch before merging new changes).. -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sun Mar 24 23:26:07 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 24 Mar 2013 23:26:07 -0400 Subject: [SciPy-Dev] Using sphinx :func: markup in docstring Message-ID: Is there a policy on using Sphinx hyperlinking references in docstrings? There are a couple references in scipy.stats that look like they are intended to use Sphinx markup, but they don't actually create links in the HTML output. Specifically, the ranksums docstring refers to `stats.mannwhitneyu`_, and f_oneway refers to `stats.kruskal`_. These do not create links in the HTML output. 
The only way I've been able to get a link generated is to use, e.g., :func:`scipy.stats.kruskal`. Do we have a policy on using such markup? Warren From pav at iki.fi Mon Mar 25 04:39:24 2013 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 25 Mar 2013 08:39:24 +0000 (UTC) Subject: [SciPy-Dev] Using sphinx :func: markup in docstring References: Message-ID: Warren Weckesser gmail.com> writes: > Is there a policy on using Sphinx hyperlinking references in > docstrings? There are a couple references in scipy.stats that look > like they are intended to use Sphinx markup, but they don't actually > create links in the HTML output. Specifically, the ranksums docstring > refers to `stats.mannwhitneyu`_, and f_oneway refers to > `stats.kruskal`_. These do not create links in the HTML output. The > only way I've been able to get a link generated is to use, e.g., > :func:`scipy.stats.kruskal`. `scipy.stats.kruskal` should also generate a link (note: no trailing _). -- Pauli Virtanen From warren.weckesser at gmail.com Mon Mar 25 09:58:27 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Mon, 25 Mar 2013 09:58:27 -0400 Subject: [SciPy-Dev] Using sphinx :func: markup in docstring In-Reply-To: References: Message-ID: On 3/25/13, Pauli Virtanen wrote: > Warren Weckesser gmail.com> writes: >> Is there a policy on using Sphinx hyperlinking references in >> docstrings? There are a couple references in scipy.stats that look >> like they are intended to use Sphinx markup, but they don't actually >> create links in the HTML output. Specifically, the ranksums docstring >> refers to `stats.mannwhitneyu`_, and f_oneway refers to >> `stats.kruskal`_. These do not create links in the HTML output. The >> only way I've been able to get a link generated is to use, e.g., >> :func:`scipy.stats.kruskal`. > > `scipy.stats.kruskal` should also generate a link (note: no trailing _). > Perfect, thanks. 
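[Editor's note: the three spellings discussed in this thread behave differently in the rendered docs. A hedged summary — the bare-name behavior assumes the NumPy/SciPy doc builds' `default_role = 'autolink'` configuration:]

```rst
`stats.mannwhitneyu`_        -- reST *external* link syntax; without a matching
                                ``.. _stats.mannwhitneyu: http://...`` target
                                definition, no link is generated
`scipy.stats.kruskal`        -- bare default role (no trailing underscore);
                                resolves via the doc build's autolink setting
:func:`scipy.stats.kruskal`  -- explicit cross-reference role; links whenever
                                the target function is documented
```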
Warren > -- > Pauli Virtanen > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From ralf.gommers at gmail.com Mon Mar 25 16:35:33 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 25 Mar 2013 21:35:33 +0100 Subject: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki Algorithm In-Reply-To: <20130318151922.GA20372@brutus.lostpackets.de> References: <20130309012621.GA65314@brutus.lostpackets.de> <20130318151922.GA20372@brutus.lostpackets.de> Message-ID: On Mon, Mar 18, 2013 at 4:19 PM, Christian Geier wrote: > On Fri, Mar 08, 2013 at 05:18:56PM -0800, Jacob Vanderplas wrote: > > Hi, > > We have a cython version of this Lomb-Scargle algorithm in astroML [1], > as > > well as a generalized version that does not depend on the sample mean > being > > a good approximation of the true mean. We've not yet implemented the FFT > > trick shown in Press & Rybicki, but it's fairly fast as-is for problems > of > > reasonable size. > > > > For some examples of it in use on astronomical data, see [2-3] below (the > > examples are figures from our upcoming astro/statistics textbook). This > > code is BSD-licensed, so if it seems generally useful enough to include > in > > scipy, it would be no problem to port it. > If it's faster and/or more accurate than what we have now, then adding it to scipy sounds like a good idea to me. > > > > Also, if you have implemented an FFT-based version, it would get a fair > bit > > of use in the astronomy community if you were willing to contribute it to > > astroML. 
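[Editor's note: as background for the thread above, the classic normalized Lomb-Scargle periodogram — the slow O(N · Nf) reference formulation that the Press & Rybicki FFT trick accelerates — fits in a few lines of NumPy. This is a sketch of the standard Scargle equations only, not the astroML code or the translated FORTRAN under discussion:]

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic normalized Lomb-Scargle periodogram for irregularly
    sampled data. O(N * len(freqs)); the fast algorithm approximates
    the same trig sums via an FFT on an extirpolated grid."""
    y = y - y.mean()
    var = y.var()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # Time offset tau makes the sine/cosine terms orthogonal:
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)) / (2 * var)
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))          # irregular sampling times
y = np.sin(2 * np.pi * 0.27 * t) + 0.1 * rng.standard_normal(300)
freqs = np.linspace(0.01, 1.0, 500)
p = lomb_scargle(t, y, freqs)
print(freqs[p.argmax()])                       # peak near the true 0.27
```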
> > > > Thanks, > > Jake > > > > [1] > > > https://github.com/astroML/astroML/blob/master/astroML_addons/periodogram.pyx > > [2] http://astroml.github.com/book_figures/chapter10/fig_LS_example.html > > [3] > > > http://astroml.github.com/book_figures/chapter10/fig_LS_sg_comparison.html > > > > > > On Fri, Mar 8, 2013 at 5:26 PM, Christian Geier >wrote: > > > > > Hello everyone! > > > > > > Would you in general be considering to include the Lombscargle > > > Periodogram by Press & Rybicki [1] into scipy in addition to the > already > > > present one? I find the included algorithm by Townend rather slow and > > > had recently some "interesting" results returned by it. > > > > > > I've recently translated the original FORTRAN code (which is actually > > > the description of the algorithm [1]) to (pure) python [2] and would > like > > > to know what the legal situation is in this case: can I release this > > > translated code under the BSD license? > A pure translation of the Fortran code in that paper is not OK to distribute under a BSD license. You'd have to get permission from the publisher probably. Reimplementing based on the equations in the paper would be OK though. Ralf > > > > > > In this case I would translate the code further to cython and supply > > > tests and more documentation. > > > > > > Greetings > > > > > > Christian Geier > > > > > > [1] http://adsabs.harvard.edu/full/1989ApJ...338..277P > > > [2] > > > > https://github.com/geier/scipy/commit/710bf4ca514d223df39891eb20401aba2edd8cdb > > Hi, > thanks for your answer. When time allows for it (hopefully next week) > I will improve the code some more and then contact astroML. > > Regards > Christian > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Mon Mar 25 19:27:35 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Mar 2013 00:27:35 +0100 Subject: [SciPy-Dev] NumPy/SciPy participation in GSoC 2013 In-Reply-To: References: Message-ID: On Thu, Mar 21, 2013 at 10:20 PM, Ralf Gommers wrote: > Hi all, > > It is the time of the year for Google Summer of Code applications. If we > want to participate with Numpy and/or Scipy, we need two things: enough > mentors and ideas for projects. If we get those, we'll apply under the PSF > umbrella. They've outlined the timeline they're working by and guidelines > at > http://pyfound.blogspot.nl/2013/03/get-ready-for-google-summer-of-code.html. > > > We should be able to come up with some interesting project ideas I'd > think, let's put those at > http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas. Preferably with > enough detail to be understandable for people new to the projects and a > proposed mentor. > > We need at least 3 people willing to mentor a student. Ideally we'd have > enough mentors this week, so we can apply to the PSF on time. If you're > willing to be a mentor, please send me the following: name, email address, > phone nr, and what you're interested in mentoring. If you have time > constaints and have doubts about being able to be a primary mentor, being a > backup mentor would also be helpful. > So far we've only got one primary mentor (thanks Chuck!), most core devs do not seem to have the bandwidth this year. If there are other people interested in mentoring please let me know. If not, then it looks like we're not participating this year. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fbreitling at aip.de Tue Mar 26 06:10:07 2013 From: fbreitling at aip.de (Frank Breitling) Date: Tue, 26 Mar 2013 11:10:07 +0100 Subject: [SciPy-Dev] OSError with Cookbook In-Reply-To: References: <510F8EF7.7010502@aip.de> Message-ID: <515173FF.4080708@aip.de> Thank you, I can create pages now. However, two issues remain: 1. I still get the following error message although the pages gets created (http://www.scipy.org/Cookbook/Histograms#preview): === Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at enthought.com and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log." === Maybe someone wants to fix this. 2. How can I move or delete a page? After creating http://www.scipy.org/Cookbook/Matplotlib/Histograms I realized that it would better fit into the Numpy section at http://www.scipy.org/Cookbook/Histograms . However, I didn't see a way how to move or delete the previous page. Can anybody tell me? Frank On 17.02.2013 01:23, Robert Kern wrote: > On Sat, Feb 16, 2013 at 9:07 PM, Ralf Gommers wrote: >> On Mon, Feb 4, 2013 at 11:35 AM, Frank Breitling wrote: >>> Hi, >>> >>> I have been trying to create a new page for the SciPy cookbook via >>> >>> http://www.scipy.org/Cookbook/Matplotlib/Histograms?action=edit >>> >>> but received the error below. >>> >>> Who can solve this? >> I'm running into the same issue. If it's one "kick the server" action away, >> it would be great if someone does that. Otherwise, hurrying up with the new >> scipy.org site may be effort better spent. > Please try again. As the spam pages accumulate, they eventually hit > the maximum number of files allowed in a directory. I have removed all > non-current page directories away. 
> From fbreitling at aip.de Tue Mar 26 06:15:02 2013 From: fbreitling at aip.de (Frank Breitling) Date: Tue, 26 Mar 2013 11:15:02 +0100 Subject: [SciPy-Dev] 2D histogram: request for plotting variable bin size In-Reply-To: <510D7C74.8040901@molden.no> References: <510A794D.7050102@aip.de> <510A845E.8030008@molden.no> <510A85CF.2050609@molden.no> <510A8E85.7030300@aip.de> <510A9426.9080401@molden.no> <510B9AA8.4030401@aip.de> <510BE65B.7040300@molden.no> <510CE218.3090908@aip.de> <510D347B.2000207@molden.no> <510D7007.5000808@aip.de> <510D7C74.8040901@molden.no> Message-ID: <51517526.5020909@aip.de> Hi, And how can I add or update images of the numpy doc (e.g. at http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html)? Meanwhile I have created an example for histograms with variable bin size at http://www.scipy.org/Cookbook/Histograms and would like to add this to the numpy doc, too. Therefore I need to replace the image. Frank On 02.02.2013 21:52, Sturla Molden wrote: > That is not how it works. > > If you have a suggestion for changing the NumPy (including docs) you > fork NumPy on GitHub.com and post a pull-request with your changes. And > that would be after asking on the NumPy list (not on scipy-dev!). > They might also want you to open an issue on their GitHub tracker. > > All users on scipy.org should be able to update the cookbook. At least > that it how it worked before. If you want to create a new page on the > wiki you just navigate to it with your browser. > > Sturla > > > On 02.02.2013 20:59, Frank Breitling wrote: >> Hi, >> >> So I registered an account, but I have to request write permission by >> email to scipy-dev at scipy.org first. >> So could somebody give me write permission to the page >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html >> and maybe also to http://www.scipy.org/Cookbook/Matplotlib ? >> >> Or otherwise could somebody add the example below for me? 
>> >> Frank >> >> --- >> import numpy as np, matplotlib.pyplot as plt >> >> x = np.random.normal(3, 2, 1000) >> y = np.random.normal(3, 1, 1000) >> >> yedges=xedges=[0,2,3,4,6] >> >> H, yedges, xedges = np.histogram2d(y,x, bins=(yedges,xedges)) >> extent = [xedges[0], xedges[-1], yedges[-1], yedges[0]] >> >> #If bin size is equal imshow can be used. It is fast and provides >> interpolation. >> #plt.imshow(H, extent=extent, interpolation='None', >> aspect='auto') >> >> #To display variable bin size pcolar can be used >> X,Y = np.meshgrid(xedges, yedges) >> plt.pcolor(X, Y, >> H) >> >> #If interpolation is needed in addition matplotlib provides the >> NonUniformImage: >> #http://matplotlib.org/examples/pylab_examples/image_nonuniform.html >> >> plt.colorbar() >> plt.show() >> >> >> >> On 2013-02-02 16:44, Sturla Molden wrote: >>> On 02.02.2013 10:53, Frank Breitling wrote: >>>> Hi, >>>> >>>> From the matplotlib developers it was pointed out, that there is a >>>> NonUniformImage which might be suite for representing interpolated >>>> variable bin size 2D histograms >>>> (https://github.com/matplotlib/matplotlib/issues/1729#issuecomment-13014723). >>>> There even exists an example >>>> (http://matplotlib.org/examples/pylab_examples/image_nonuniform.html) >>>> but it is very isolated and therefore not well known. >>>> It would be very useful to explain its usage or at least link to it in >>>> the histogram2d example at >>>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html. >>>> >>>> In addition a pcolor example (attached below) shouldn't be missing. Even >>>> though it is slow and can't do interpolation, it can at least do a >>>> correct representation for academic purposes. >>>> >>>> Can anybody do that or would you like me to do that myself? >>>> >>>> Frank >>> Since it is at the top of your head, why don't you do it? >>> >>> But it might be better for the SciPy Cookbook's matplotlib section >>> than the NumPy docs. 
I'm not sure if they want matplotlib examples in >>> the NumPy documentation (ask on the NumPy list before you waste your >>> time on it). >>> >>> >>> http://www.scipy.org/Cookbook/Matplotlib >>> >>> >>> >>> Sturla >>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From fbreitling at aip.de Tue Mar 26 06:52:58 2013 From: fbreitling at aip.de (Frank Breitling) Date: Tue, 26 Mar 2013 11:52:58 +0100 Subject: [SciPy-Dev] OSError with Cookbook In-Reply-To: <515173FF.4080708@aip.de> References: <510F8EF7.7010502@aip.de> <515173FF.4080708@aip.de> Message-ID: <51517E0A.5000804@aip.de> A third problem is: "You are not allowed to delete attachments on this page." However I created the side and uploaded the attachment. I just want to update it. Overwriting also fails. Frank On 26.03.2013 11:10, Frank Breitling wrote: > Thank you, I can create pages now. > However, two issues remain: > > 1. I still get the following error message although the pages gets > created (http://www.scipy.org/Cookbook/Histograms#preview): > === > Internal Server Error > > The server encountered an internal error or misconfiguration and was > unable to complete your request. > > Please contact the server administrator, root at enthought.com and inform > them of the time the error occurred, and anything you might have done > that may have caused the error. > > More information about this error may be available in the server error log." > === > Maybe someone wants to fix this. > > > 2. How can I move or delete a page? 
> After creating http://www.scipy.org/Cookbook/Matplotlib/Histograms > I realized that it would better fit into the Numpy section at > http://www.scipy.org/Cookbook/Histograms . > However, I didn't see a way how to move or delete the previous page. > Can anybody tell me? > > > Frank > > > On 17.02.2013 01:23, Robert Kern wrote: >> On Sat, Feb 16, 2013 at 9:07 PM, Ralf Gommers wrote: >>> On Mon, Feb 4, 2013 at 11:35 AM, Frank Breitling wrote: >>>> Hi, >>>> >>>> I have been trying to create a new page for the SciPy cookbook via >>>> >>>> http://www.scipy.org/Cookbook/Matplotlib/Histograms?action=edit >>>> >>>> but received the error below. >>>> >>>> Who can solve this? >>> I'm running into the same issue. If it's one "kick the server" action away, >>> it would be great if someone does that. Otherwise, hurrying up with the new >>> scipy.org site may be effort better spent. >> Please try again. As the spam pages accumulate, they eventually hit >> the maximum number of files allowed in a directory. I have removed all >> non-current page directories away. >> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From suryak at ieee.org Tue Mar 26 12:23:50 2013 From: suryak at ieee.org (Surya Kasturi) Date: Tue, 26 Mar 2013 21:53:50 +0530 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? Message-ID: Hi guys, My next target in Scipy Central development is to implement RSS/Atom Feeds. As feeds provide us lots of functionality (Feed Readers, Email Subscriptions etc.. ) I decided to take this up. (what do you think of it?) From the issue tracker on Github, I have also seen API as "need to be done" Here is the thing: If implemented nicely, Feeds can be used as API to a great extent. -- in cheapest possible way (no extra server load, no need to have OAuth, no need to have separate database tables, tracking calls etc.. 
) This is where I stuck on -- taking design decision. If we got to use Feeds as API too, then we need to define our own xml elements, like "code", "snippet", "submission-link" etc.. (example).. I guess, these things are not required if we don't want api to be handled via feeds. So, what do you say? Any ideas on it? Note: considering my current time schedule, I will take this development, a bit slow. Surya -------------- next part -------------- An HTML attachment was scrubbed... URL: From suryak at ieee.org Tue Mar 26 12:42:53 2013 From: suryak at ieee.org (Surya Kasturi) Date: Tue, 26 Mar 2013 22:12:53 +0530 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 9:53 PM, Surya Kasturi wrote: > Hi guys, > > My next target in Scipy Central development is to implement RSS/Atom > Feeds. As feeds provide us lots of functionality (Feed Readers, Email > Subscriptions etc.. ) I decided to take this up. > > (what do you think of it?) > > From the issue tracker on Github, I have also seen API as "need to be done" > > Here is the thing: If implemented nicely, Feeds can be used as API to a > great extent. -- in cheapest possible way (no extra server load, no need to > have OAuth, no need to have separate database tables, tracking calls etc.. ) > > This is where I stuck on -- taking design decision. > > If we got to use Feeds as API too, then we need to define our own xml > elements, like "code", "snippet", "submission-link" etc.. (example).. I > guess, these things are not required if we don't want api to be handled via > feeds. > > So, what do you say? Any ideas on it? > > Note: considering my current time schedule, I will take this development, > a bit slow. > > > Surya > Forgot to mention: If we are only assuming "read only" api... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Tue Mar 26 14:33:52 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Mar 2013 19:33:52 +0100 Subject: [SciPy-Dev] 2D histogram: request for plotting variable bin size In-Reply-To: <51517526.5020909@aip.de> References: <510A794D.7050102@aip.de> <510A845E.8030008@molden.no> <510A85CF.2050609@molden.no> <510A8E85.7030300@aip.de> <510A9426.9080401@molden.no> <510B9AA8.4030401@aip.de> <510BE65B.7040300@molden.no> <510CE218.3090908@aip.de> <510D347B.2000207@molden.no> <510D7007.5000808@aip.de> <510D7C74.8040901@molden.no> <51517526.5020909@aip.de> Message-ID: On Tue, Mar 26, 2013 at 11:15 AM, Frank Breitling wrote: > Hi, > > And how can I add or update images of the numpy doc (e.g. at > http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html > )? > That's created by the example code in the histogram2d docstring. So you can add an example in that docstring in your own numpy git repo, then send a PR with that change on Github. Ralf > Meanwhile I have created an example for histograms with variable bin > size at > http://www.scipy.org/Cookbook/Histograms > and would like to add this to the numpy doc, too. Therefore I need to > replace the image. > > Frank > > > On 02.02.2013 21:52, Sturla Molden wrote: > > That is not how it works. > > > > If you have a suggestion for changing the NumPy (including docs) you > > fork NumPy on GitHub.com and post a pull-request with your changes. And > > that would be after asking on the NumPy list (not on scipy-dev!). > > They might also want you to open an issue on their GitHub tracker. > > > > All users on scipy.org should be able to update the cookbook. At least > > that it how it worked before. If you want to create a new page on the > > wiki you just navigate to it with your browser. 
> > > > Sturla > > > > > > On 02.02.2013 20:59, Frank Breitling wrote: > >> Hi, > >> > >> So I registered an account, but I have to request write permission by > >> email to scipy-dev at scipy.org first. > >> So could somebody give me write permission to the page > >> > http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html > >> and maybe also to http://www.scipy.org/Cookbook/Matplotlib ? > >> > >> Or otherwise could somebody add the example below for me? > >> > >> Frank > >> > >> --- > >> import numpy as np, matplotlib.pyplot as plt > >> > >> x = np.random.normal(3, 2, 1000) > >> y = np.random.normal(3, 1, 1000) > >> > >> yedges=xedges=[0,2,3,4,6] > >> > >> H, yedges, xedges = np.histogram2d(y,x, bins=(yedges,xedges)) > >> extent = [xedges[0], xedges[-1], yedges[-1], yedges[0]] > >> > >> #If bin size is equal imshow can be used. It is fast and provides > >> interpolation. > >> #plt.imshow(H, extent=extent, interpolation='None', > >> aspect='auto') > >> > >> #To display variable bin size pcolar can be used > >> X,Y = np.meshgrid(xedges, yedges) > >> plt.pcolor(X, Y, > >> H) > >> > >> #If interpolation is needed in addition matplotlib provides the > >> NonUniformImage: > >> #http://matplotlib.org/examples/pylab_examples/image_nonuniform.html > >> > >> plt.colorbar() > >> plt.show() > >> > >> > >> > >> On 2013-02-02 16:44, Sturla Molden wrote: > >>> On 02.02.2013 10:53, Frank Breitling wrote: > >>>> Hi, > >>>> > >>>> From the matplotlib developers it was pointed out, that there is a > >>>> NonUniformImage which might be suite for representing interpolated > >>>> variable bin size 2D histograms > >>>> ( > https://github.com/matplotlib/matplotlib/issues/1729#issuecomment-13014723 > ). > >>>> There even exists an example > >>>> (http://matplotlib.org/examples/pylab_examples/image_nonuniform.html) > >>>> but it is very isolated and therefore not well known. 
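[Editor's note: the NonUniformImage approach referred to above can be condensed roughly as follows. This is a sketch based on the linked matplotlib example, assuming the bin centers as the image's sample points; the output file name is arbitrary:]

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                 # headless backend for scripts
import matplotlib.pyplot as plt
from matplotlib.image import NonUniformImage

x = np.random.normal(3, 2, 1000)
y = np.random.normal(3, 1, 1000)
edges = [0, 2, 3, 4, 6]               # variable-width bins
H, yedges, xedges = np.histogram2d(y, x, bins=(edges, edges))

fig, ax = plt.subplots()
im = NonUniformImage(ax, interpolation='bilinear',
                     extent=(xedges[0], xedges[-1], yedges[0], yedges[-1]))
# NonUniformImage takes the sample positions, i.e. the bin centers:
xcenters = (xedges[:-1] + xedges[1:]) / 2
ycenters = (yedges[:-1] + yedges[1:]) / 2
im.set_data(xcenters, ycenters, H)
ax.add_image(im)
ax.set_xlim(xedges[0], xedges[-1])
ax.set_ylim(yedges[0], yedges[-1])
fig.savefig("hist2d_nonuniform.png")
```

[Unlike plain imshow this interpolates between unevenly spaced bins, at the cost of placing values at bin centers rather than drawing the bins' true extents as pcolor does.]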
> >>>> It would be very useful to explain its usage or at least link to it in > >>>> the histogram2d example at > >>>> > http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html > . > >>>> > >>>> In addition a pcolor example (attached below) shouldn't be missing. > Even > >>>> though it is slow and can't do interpolation, it can at least do a > >>>> correct representation for academic purposes. > >>>> > >>>> Can anybody do that or would you like me to do that myself? > >>>> > >>>> Frank > >>> Since it is at the top of your head, why don't you do it? > >>> > >>> But it might be better for the SciPy Cookbook's matplotlib section > >>> than the NumPy docs. I'm not sure if they want matplotlib examples in > >>> the NumPy documentation (ask on the NumPy list before you waste your > >>> time on it). > >>> > >>> > >>> http://www.scipy.org/Cookbook/Matplotlib > >>> > >>> > >>> > >>> Sturla > >>> > >>> > >>> _______________________________________________ > >>> SciPy-Dev mailing list > >>> SciPy-Dev at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Mar 26 16:59:15 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Mar 2013 21:59:15 +0100 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? 
In-Reply-To: References: Message-ID: On Tue, Mar 26, 2013 at 5:42 PM, Surya Kasturi wrote: > > On Tue, Mar 26, 2013 at 9:53 PM, Surya Kasturi wrote: > >> Hi guys, >> >> My next target in Scipy Central development is to implement RSS/Atom >> Feeds. As feeds provide us lots of functionality (Feed Readers, Email >> Subscriptions etc.. ) I decided to take this up. >> >> (what do you think of it?) >> > Sounds useful to have an RSS feed. > >> From the issue tracker on Github, I have also seen API as "need to be >> done" >> > Note that while the issue tracker contains some useful ideas, it's certainly not a list of stuff that has to be done. I'm probably not telling you anything you didn't already know here, but just in case. >> Here is the thing: If implemented nicely, Feeds can be used as API to a >> great extent. -- in cheapest possible way (no extra server load, no need to >> have OAuth, no need to have separate database tables, tracking calls etc.. ) >> >> This is where I stuck on -- taking design decision. >> >> If we got to use Feeds as API too, then we need to define our own xml >> elements, like "code", "snippet", "submission-link" etc.. (example).. I >> guess, these things are not required if we don't want api to be handled via >> feeds. >> >> So, what do you say? Any ideas on it? >> > I have to confess that I don't know enough about web development to have an informed opinion on this. I agree that read-only API should be enough though. > >> Note: considering my current time schedule, I will take this development, >> a bit slow. >> > Given your time constraints, I do want to point out that while rss feeds are nice, they're mostly orthogonal to the rest of the site. At some point it will be better to get the new code up and running so people can start using it, rather than to keep writing new code. Users also generate new ideas (and bug reports). Just a thought. Cheers, Ralf > >> Surya >> > Forgot to mention: If we are only assuming "read only" api... 
> > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Wed Mar 27 05:37:47 2013 From: toddrjen at gmail.com (Todd) Date: Wed, 27 Mar 2013 10:37:47 +0100 Subject: [SciPy-Dev] Vector Strength function In-Reply-To: References: Message-ID: On Fri, Feb 1, 2013 at 5:19 PM, Todd wrote: > On Wed, Jan 9, 2013 at 8:44 PM, wrote: > >> On Wed, Jan 9, 2013 at 12:32 PM, Todd wrote: >> > I am interested in implementing a function for scipy. The function is >> > called "vector strength". It is basically a measure of how reliably a >> set >> > of events occur at a particular phase. >> > >> > It was originally developed for neuroscience research, to determine how >> well >> > a set of neural events sync up with a periodic stimulus like a sound >> > waveform. >> > >> > However, it is useful for determining how periodic a supposedly >> periodic set >> > of events really are, for example: >> > >> > 1. Determining whether crime is really more common during a full moon >> and by >> > how much >> > 2. Determining how concentrated visitors to a coffee shop are during >> rush >> > hour >> > 3. Determining exactly how concentrated hurricanes are during hurricane >> > season >> > >> > >> > My thinking is that this could be implemented in stages: >> > >> > First would be a Numpy function that would add a set of vectors in polar >> > coordinates. Given a number of magnitude/angle pairs it would provide a >> > summed magnitude/angle pair. This would probably be combined with a >> > cartesian<->polar conversion functions. >> > >> > Making use of this function would be a scipy function that would >> actually >> > implement the vector strength calculation. This is done by treating >> each >> > event as a unit vector with a phase, then taking the average of the >> vectors. 
>> > If all events have the same phase, the result will have an amplitude of >> 1. >> > If they all have a different phases, the result will have an amplitude >> of 0. >> > >> > It may even be worth having a dedicated polar dtype, although that may >> be >> > too much. >> > >> > What does everyone think of this proposal? >> >> Is this the same as a mean resultant in circular statistics? >> >> def circular_resultant(rads, axis=0): >> mp = np.sum(np.exp(1j*rads), axis=axis) >> rho = np.abs(mp) >> mu = np.angle(mp) >> >> return mp, rho, mu >> >> Josef > > > It looks to be the same as the first part of my proposal. > So does anyone have any opinions on this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Wed Mar 27 06:06:43 2013 From: toddrjen at gmail.com (Todd) Date: Wed, 27 Mar 2013 11:06:43 +0100 Subject: [SciPy-Dev] Peak handling with plateaus Message-ID: Currently the scipy peak-finding algorithms work for something like this: a=np.array([1,2,3,4,5,4,3,2,1]) However, they fail for something like this: a=np.array([1,2,3,4,5,5,5,5,4,3,2,1]) I've come across a recursive algorithm that can handle these situations (I implemented it in python before). It works in two stages, and is not that difficult to implement. Assuming we are looking for the maximum, the algorithm goes like this: 1. Strip out all values from the beginning where diff > 0 and all values from the end where diff < 0 2. Find the minimum value 3. If the minimum value is equal to the maximum value, return the center index of the array (or the floor of the center index if there are an even number of elements) 4. Otherwise, split the array into sub-arrays around each instance of the minimum value and re-run the function on those sub-arrays. This can be followed up with a diff test around each maxima/minima to make sure the peaks are above a certain SNR or absolute amplitude. 
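For concreteness, here is a rough Python sketch of the four steps above — my own illustration with a made-up function name, not scipy code:

```python
import numpy as np

def plateau_peaks(a):
    """Recursive plateau-aware peak finder -- an illustration of the
    steps described in the post above, not scipy's implementation.
    A plateau is reported as a single peak at the (floored) centre
    of its flat top.
    """
    a = np.asarray(a, dtype=float)

    def _find(lo, hi):                       # search within a[lo:hi]
        seg = a[lo:hi]
        if seg.size == 0:
            return []
        d = np.diff(seg)
        # Step 1: strip the rising head and the falling tail.
        start = 0
        while start < d.size and d[start] > 0:
            start += 1
        end = seg.size
        while end >= 2 and d[end - 2] < 0:
            end -= 1
        if start >= end:
            return []
        seg = a[lo + start:lo + end]
        # Steps 2-3: a flat remainder is one plateau peak at its centre.
        if seg.min() == seg.max():
            return [int(lo + start + (seg.size - 1) // 2)]
        # Step 4: split around every occurrence of the minimum and recurse.
        peaks = []
        prev = 0
        for i in list(np.flatnonzero(seg == seg.min())) + [seg.size]:
            if i > prev:
                peaks.extend(_find(lo + start + prev, lo + start + i))
            prev = i + 1
        return peaks

    return _find(0, len(a))

print(plateau_peaks([1, 2, 3, 4, 5, 5, 5, 5, 4, 3, 2, 1]))  # [5]
```

The plateau at indices 4-7 is reported once, at the floored centre index 5, and arrays with several separated peaks are handled by the recursive split around each minimum.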
This method is probably not as fast as the existing implementation, but it has the advantage that it can handle unusual corner-cases better while still handling ordinary peaks just fine. Would anyone be interested in having this sort of algorithm in scipy? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fbreitling at aip.de Wed Mar 27 06:14:22 2013 From: fbreitling at aip.de (Frank Breitling) Date: Wed, 27 Mar 2013 11:14:22 +0100 Subject: [SciPy-Dev] 2D histogram: request for plotting variable bin size In-Reply-To: References: <510A794D.7050102@aip.de> <510A845E.8030008@molden.no> <510A85CF.2050609@molden.no> <510A8E85.7030300@aip.de> <510A9426.9080401@molden.no> <510B9AA8.4030401@aip.de> <510BE65B.7040300@molden.no> <510CE218.3090908@aip.de> <510D347B.2000207@molden.no> <510D7007.5000808@aip.de> <510D7C74.8040901@molden.no> <51517526.5020909@aip.de> Message-ID: <5152C67E.70500@aip.de> Thanks Ralf, So is there no simple way to test changes before the commit? Frank On 26.03.2013 19:33, Ralf Gommers wrote: > > On Tue, Mar 26, 2013 at 11:15 AM, Frank Breitling > wrote: > > Hi, > > And how can I add or update images of the numpy doc (e.g. at > http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html)? > > > That's created by the example code in the histogram2d docstring. So > you can add an example in that docstring in your own numpy git repo, > then send a PR with that change on Github. > > Ralf > > Meanwhile I have created an example for histograms with variable bin > size at > http://www.scipy.org/Cookbook/Histograms > and would like to add this to the numpy doc, too. Therefore I need to > replace the image. > > Frank > > > On 02.02.2013 21:52, Sturla Molden wrote: > > That is not how it works. > > > > If you have a suggestion for changing the NumPy (including docs) you > > fork NumPy on GitHub.com and post a pull-request with your > changes. 
And > > that would be after asking on the NumPy list (not on scipy-dev!). > > They might also want you to open an issue on their GitHub tracker. > > > > All users on scipy.org should be able to > update the cookbook. At least > > that it how it worked before. If you want to create a new page > on the > > wiki you just navigate to it with your browser. > > > > Sturla > > > > > > On 02.02.2013 20:59, Frank Breitling wrote: > >> Hi, > >> > >> So I registered an account, but I have to request write > permission by > >> email to scipy-dev at scipy.org first. > >> So could somebody give me write permission to the page > >> > http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html > >> and maybe also to http://www.scipy.org/Cookbook/Matplotlib ? > >> > >> Or otherwise could somebody add the example below for me? > >> > >> Frank > >> > >> --- > >> import numpy as np, matplotlib.pyplot as plt > >> > >> x = np.random.normal(3, 2, 1000) > >> y = np.random.normal(3, 1, 1000) > >> > >> yedges=xedges=[0,2,3,4,6] > >> > >> H, yedges, xedges = np.histogram2d(y,x, bins=(yedges,xedges)) > >> extent = [xedges[0], xedges[-1], yedges[-1], yedges[0]] > >> > >> #If bin size is equal imshow can be used. It is fast and provides > >> interpolation. 
> >> [clip] 
> >>> > >>> > http://www.scipy.org/Cookbook/Matplotlib > >>> > >>> > >>> > >>> Sturla > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From geier at lostpackets.de Wed Mar 27 08:27:27 2013 From: geier at lostpackets.de (Christian Geier) Date: Wed, 27 Mar 2013 12:27:27 +0000 Subject: [SciPy-Dev] Lomb-Scargle Periodogram: Press & Rybicki Algorithm In-Reply-To: References: <20130309012621.GA65314@brutus.lostpackets.de> <20130318151922.GA20372@brutus.lostpackets.de> Message-ID: <20130327122727.GA27335@brutus.lostpackets.de> On Mon, Mar 25, 2013 at 09:35:33PM +0100, Ralf Gommers wrote: > > A pure translation of the Fortran code in that paper is not OK to > distribute under a BSD license. You'd have to get permission from the > publisher probably. Reimplementing based on the equations in the paper > would be OK though. > > Ralf As the second option is not possible ("There are several other tricks involved in implementing this algorithm efficiently. Rather than try to describe these in words, we direct attention to the commented program implementation [...]"), I will try to get permission from the publisher/authors. 
Regards, Christian From ralf.gommers at gmail.com Wed Mar 27 15:24:58 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 27 Mar 2013 20:24:58 +0100 Subject: [SciPy-Dev] 2D histogram: request for plotting variable bin size In-Reply-To: <5152C67E.70500@aip.de> References: <510A794D.7050102@aip.de> <510A845E.8030008@molden.no> <510A85CF.2050609@molden.no> <510A8E85.7030300@aip.de> <510A9426.9080401@molden.no> <510B9AA8.4030401@aip.de> <510BE65B.7040300@molden.no> <510CE218.3090908@aip.de> <510D347B.2000207@molden.no> <510D7007.5000808@aip.de> <510D7C74.8040901@molden.no> <51517526.5020909@aip.de> <5152C67E.70500@aip.de> Message-ID: On Wed, Mar 27, 2013 at 11:14 AM, Frank Breitling wrote: > Thanks Ralf, > > So is there no simple way to test changes before the commit? > Sure there is. One way is: 1. build numpy in-place (python setup.py build_ext -i) and add it to your PYTHONPATH. (alternatively: python setup.py develop) 2. edit the docstring 3. "make html" in the doc/ directory 4. the built docs are now in doc/build/html, view the result in your browser (5. if the result looks good, commit it and send a PR) Cheers, Ralf > > Frank > > > On 26.03.2013 19:33, Ralf Gommers wrote: > > > On Tue, Mar 26, 2013 at 11:15 AM, Frank Breitling wrote: > >> Hi, >> >> And how can I add or update images of the numpy doc (e.g. at >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html >> )? >> > > That's created by the example code in the histogram2d docstring. So you > can add an example in that docstring in your own numpy git repo, then send > a PR with that change on Github. > > Ralf > > > >> Meanwhile I have created an example for histograms with variable bin >> size at >> http://www.scipy.org/Cookbook/Histograms >> and would like to add this to the numpy doc, too. Therefore I need to >> replace the image. >> >> Frank >> >> >> On 02.02.2013 21:52, Sturla Molden wrote: >> > That is not how it works. 
>> > >> > If you have a suggestion for changing the NumPy (including docs) you >> > fork NumPy on GitHub.com and post a pull-request with your changes. And >> > that would be after asking on the NumPy list (not on scipy-dev!). >> > They might also want you to open an issue on their GitHub tracker. >> > >> > All users on scipy.org should be able to update the cookbook. At least >> > that it how it worked before. If you want to create a new page on the >> > wiki you just navigate to it with your browser. >> > >> > Sturla >> > >> > >> > On 02.02.2013 20:59, Frank Breitling wrote: >> >> Hi, >> >> >> >> So I registered an account, but I have to request write permission by >> >> email to scipy-dev at scipy.org first. >> >> So could somebody give me write permission to the page >> >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html >> >> and maybe also to http://www.scipy.org/Cookbook/Matplotlib ? >> >> >> >> Or otherwise could somebody add the example below for me? >> >> >> >> Frank >> >> >> >> --- >> >> import numpy as np, matplotlib.pyplot as plt >> >> >> >> x = np.random.normal(3, 2, 1000) >> >> y = np.random.normal(3, 1, 1000) >> >> >> >> yedges=xedges=[0,2,3,4,6] >> >> >> >> H, yedges, xedges = np.histogram2d(y,x, bins=(yedges,xedges)) >> >> extent = [xedges[0], xedges[-1], yedges[-1], yedges[0]] >> >> >> >> #If bin size is equal imshow can be used. It is fast and provides >> >> interpolation. 
>> >> [clip] 
>> >>> >> >>> http://www.scipy.org/Cookbook/Matplotlib >> >>> >> >>> >> >>> Sturla > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cs770 at york.ac.uk Thu Mar 28 11:52:44 2013 From: cs770 at york.ac.uk (Clare Sutherland) Date: Thu, 28 Mar 2013 15:52:44 +0000 (UTC) Subject: [SciPy-Dev] Splines References: Message-ID: Charles R Harris gmail.com> writes: > > Hi All, There have been several threads on the list about splines and consolidation of splines. ....Thoughts? Chuck > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > Hear hear! I've been trying to use Python to interpolate natural cubic splines.... This has not been easy (especially since it's my first script in Python!) 
Part of the problem is that the documentation on it just now is so terse that without a mathematical background it's difficult to know what the options are. It doesn't help that online unofficial user documentation refers to scipy.interpolate as if it creates a natural cubic spline, but this is incorrect (I think...) So my idea is that it would be useful to have a more informative, clearer official user documentation. This would hopefully clear up some of the confusion. Best wishes, Clare From denis-bz-py at t-online.de Thu Mar 28 13:44:59 2013 From: denis-bz-py at t-online.de (denis) Date: Thu, 28 Mar 2013 17:44:59 +0000 (UTC) Subject: [SciPy-Dev] Splines References: Message-ID: Folks, yet another little testcase for splines turns out to be quite funny: http://imgur.com/Z9ikNCk is what splev / interp1d do on random-uniform X -> linspace Y -- X = np.r_[ 0, np.sort( np.random.uniform( size = nx - 2 )), 1 ] Y = np.r_[ 0., np.arange(nx - 2), nx - 2 ] interp1d-splev.py is in https://gist.github.com/denis-bz/5265158 While I guess this won't surprise spline experts it does show HOW non-monotone splines can get and the importance of boundary conditions. cheers -- denis From suryak at ieee.org Fri Mar 29 02:01:37 2013 From: suryak at ieee.org (Surya Kasturi) Date: Fri, 29 Mar 2013 11:31:37 +0530 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 2:29 AM, Ralf Gommers wrote: > > > > On Tue, Mar 26, 2013 at 5:42 PM, Surya Kasturi wrote: > >> >> On Tue, Mar 26, 2013 at 9:53 PM, Surya Kasturi wrote: >> >>> Hi guys, >>> >>> My next target in Scipy Central development is to implement RSS/Atom >>> Feeds. As feeds provide us lots of functionality (Feed Readers, Email >>> Subscriptions etc.. ) I decided to take this up. >>> >>> (what do you think of it?) >>> >> > Sounds useful to have an RSS feed. 
> > >> >>> >From the issue tracker on Github, I have also seen API as "need to be >>> done" >>> >> > Note that while the issue tracker contains some useful ideas, it's > certainly not a list of stuff that has to be done. I'm probably not telling > you anything you didn't already know here, but just in case. > > >>> Here is the thing: If implemented nicely, Feeds can be used as API to a >>> great extent. -- in cheapest possible way (no extra server load, no need to >>> have OAuth, no need to have separate database tables, tracking calls etc.. ) >>> >>> This is where I stuck on -- taking design decision. >>> >>> If we got to use Feeds as API too, then we need to define our own xml >>> elements, like "code", "snippet", "submission-link" etc.. (example).. I >>> guess, these things are not required if we don't want api to be handled via >>> feeds. >>> >>> So, what do you say? Any ideas on it? >>> >> > I have to confess that I don't know enough about web development to have > an informed opinion on this. I agree that read-only API should be enough > though. > > >> >>> Note: considering my current time schedule, I will take this >>> development, a bit slow. >>> >> > Given your time constraints, I do want to point out that while rss feeds > are nice, they're mostly orthogonal to the rest of the site. At some point > it will be better to get the new code up and running so people can start > using it, rather than to keep writing new code. Users also generate new > ideas (and bug reports). Just a thought. > I am currently running a "test" site on production server.. Waiting for: 1. design-1.0 merge into master 2. spc database from kevin (emailed him) > > Cheers, > Ralf > > > >> >>> Surya >>> >> Forgot to mention: If we are only assuming "read only" api... 
>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Mar 29 16:41:56 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 29 Mar 2013 21:41:56 +0100 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 7:01 AM, Surya Kasturi wrote: > > > > On Wed, Mar 27, 2013 at 2:29 AM, Ralf Gommers wrote: > >> >> >> >> On Tue, Mar 26, 2013 at 5:42 PM, Surya Kasturi wrote: >> >>> >>> On Tue, Mar 26, 2013 at 9:53 PM, Surya Kasturi wrote: >>> >>>> Hi guys, >>>> >>>> My next target in Scipy Central development is to implement RSS/Atom >>>> Feeds. As feeds provide us lots of functionality (Feed Readers, Email >>>> Subscriptions etc.. ) I decided to take this up. >>>> >>>> (what do you think of it?) >>>> >>> >> Sounds useful to have an RSS feed. >> >> >>> >>>> >From the issue tracker on Github, I have also seen API as "need to be >>>> done" >>>> >>> >> Note that while the issue tracker contains some useful ideas, it's >> certainly not a list of stuff that has to be done. I'm probably not telling >> you anything you didn't already know here, but just in case. >> >> >>>> Here is the thing: If implemented nicely, Feeds can be used as API to a >>>> great extent. -- in cheapest possible way (no extra server load, no need to >>>> have OAuth, no need to have separate database tables, tracking calls etc.. ) >>>> >>>> This is where I stuck on -- taking design decision. >>>> >>>> If we got to use Feeds as API too, then we need to define our own xml >>>> elements, like "code", "snippet", "submission-link" etc.. (example).. 
I >>>> guess, these things are not required if we don't want api to be handled via >>>> feeds. >>>> >>>> So, what do you say? Any ideas on it? >>>> >>> >> I have to confess that I don't know enough about web development to have >> an informed opinion on this. I agree that read-only API should be enough >> though. >> >> >>> >>>> Note: considering my current time schedule, I will take this >>>> development, a bit slow. >>>> >>> >> Given your time constraints, I do want to point out that while rss feeds >> are nice, they're mostly orthogonal to the rest of the site. At some point >> it will be better to get the new code up and running so people can start >> using it, rather than to keep writing new code. Users also generate new >> ideas (and bug reports). Just a thought. >> > > I am currently running a "test" site on production server.. > > Waiting for: > 1. design-1.0 merge into master > 2. spc database from kevin (emailed him) > Awesome Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Mar 29 19:03:25 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 30 Mar 2013 01:03:25 +0200 Subject: [SciPy-Dev] design decision needed on implementing Feeds in SPC? In-Reply-To: References: Message-ID: 29.03.2013 22:41, Ralf Gommers kirjoitti: [clip] > Waiting for: > 1. design-1.0 merge into master > 2. spc database from kevin (emailed him) The design branch is merged. Btw, we should allow uploading IPython notebook files. This should be rather simple, as all rendering can be offloaded to http://nbviewer.ipython.org/ -- Pauli Virtanen From ralf.gommers at gmail.com Sat Mar 30 08:31:36 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 30 Mar 2013 13:31:36 +0100 Subject: [SciPy-Dev] ANN: SciPy 0.12.0 release candidate 1 Message-ID: Hi, I am pleased to announce the availability of the first release candidate of SciPy 0.12.0. 
This is shaping up to be another solid release, with some cool new features (see highlights below) and a large amount of bug fixes and maintenance work under the hood. The number of contributors also keeps rising steadily, we're at 74 so far for this release. Sources and binaries can be found at http://sourceforge.net/projects/scipy/files/scipy/0.12.0rc1/, release notes are copied below. Please try this release and report any problems on the mailing list. If no issues are found, the final release will be in one week. Cheers, Ralf ========================== SciPy 0.12.0 Release Notes ========================== .. note:: Scipy 0.12.0 is not released yet! .. contents:: SciPy 0.12.0 is the culmination of 7 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.12.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Completed QHull wrappers in scipy.spatial. - cKDTree now a drop-in replacement for KDTree. - A new global optimizer, basinhopping. - Support for Python 2 and Python 3 from the same code base (no more 2to3). This release requires Python 2.6, 2.7 or 3.1-3.3 and NumPy 1.5.1 or greater. Support for Python 2.4 and 2.5 has been dropped as of this release. New features ============ ``scipy.spatial`` improvements ------------------------------ cKDTree feature-complete ^^^^^^^^^^^^^^^^^^^^^^^^ Cython version of KDTree, cKDTree, is now feature-complete. Most operations (construction, query, query_ball_point, query_pairs, count_neighbors and sparse_distance_matrix) are between 200 and 1000 times faster in cKDTree than in KDTree. 
With very minor caveats, cKDTree has exactly the same interface as KDTree, and can be used as a drop-in replacement. Voronoi diagrams and convex hulls ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `scipy.spatial` now contains functionality for computing Voronoi diagrams and convex hulls using the Qhull library. (Delaunay triangulation was available since Scipy 0.9.0.) Delaunay improvements ^^^^^^^^^^^^^^^^^^^^^ It's now possible to pass in custom Qhull options in Delaunay triangulation. Coplanar points are now also recorded, if present. Incremental construction of Delaunay triangulations is now also possible. Spectral estimators (``scipy.signal``) -------------------------------------- The functions ``scipy.signal.periodogram`` and ``scipy.signal.welch`` were added, providing DFT-based spectral estimators. ``scipy.optimize`` improvements ------------------------------- Callback functions in L-BFGS-B and TNC ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A callback mechanism was added to L-BFGS-B and TNC minimization solvers. Basin hopping global optimization (``scipy.optimize.basinhopping``) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A new global optimization algorithm. Basinhopping is designed to efficiently find the global minimum of a smooth function. ``scipy.special`` improvements ------------------------------ Revised complex error functions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The computation of special functions related to the error function now uses a new `Faddeeva library from MIT `__ which increases their numerical precision. The scaled and imaginary error functions ``erfcx`` and ``erfi`` were also added, and the Dawson integral ``dawsn`` can now be evaluated for a complex argument. Faster orthogonal polynomials ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Evaluation of orthogonal polynomials (the ``eval_*`` routines) is now faster in ``scipy.special``, and their ``out=`` argument functions properly. 
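As a minimal illustration of the ``scipy.special`` improvements above (a sketch assuming SciPy >= 0.12; the commented values follow from standard identities):

```python
from scipy.special import erfcx, erfi, dawsn

# erfcx(x) = exp(x**2) * erfc(x): the scaled complement stays finite
# where erfc itself underflows; erfcx(0) = erfc(0) = 1
print(erfcx(0.0))           # 1.0
# erfi is an odd function, so erfi(0) = 0
print(erfi(0.0))            # 0.0
# Dawson's integral, now also defined for a complex argument
print(dawsn(1.0))           # ~0.538
print(dawsn(0.5 + 0.5j))    # complex result from the Faddeeva routines
```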
``scipy.sparse.linalg`` features -------------------------------- - In ``scipy.sparse.linalg.spsolve``, the ``b`` argument can now be either a vector or a matrix. - ``scipy.sparse.linalg.inv`` was added. This uses ``spsolve`` to compute a sparse matrix inverse. - ``scipy.sparse.linalg.expm`` was added. This computes the exponential of a sparse matrix using a similar algorithm to the existing dense array implementation in ``scipy.linalg.expm``. Listing Matlab(R) file contents in ``scipy.io`` ----------------------------------------------- A new function ``whosmat`` is available in ``scipy.io`` for inspecting contents of MAT files without reading them into memory. Documented BLAS and LAPACK low-level interfaces (``scipy.linalg``) ------------------------------------------------------------------ The modules `scipy.linalg.blas` and `scipy.linalg.lapack` can be used to access low-level BLAS and LAPACK functions. Polynomial interpolation improvements (``scipy.interpolate``) ------------------------------------------------------------- The barycentric, Krogh, piecewise and pchip polynomial interpolators in ``scipy.interpolate`` now accept an ``axis`` argument. Deprecated features =================== `scipy.lib.lapack` ------------------ The module `scipy.lib.lapack` is deprecated. You can use `scipy.linalg.lapack` instead. The module `scipy.lib.blas` was deprecated earlier in Scipy 0.10.0. `fblas` and `cblas` ------------------- Accessing the modules `scipy.linalg.fblas`, `cblas`, `flapack`, `clapack` is deprecated. Instead, use the modules `scipy.linalg.lapack` and `scipy.linalg.blas`. Backwards incompatible changes ============================== Removal of ``scipy.io.save_as_module`` -------------------------------------- The function ``scipy.io.save_as_module`` was deprecated in Scipy 0.11.0, and is now removed. Its private support modules ``scipy.io.dumbdbm_patched`` and ``scipy.io.dumb_shelve`` are also removed. 
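For illustration, the ``scipy.sparse.linalg`` additions listed above can be exercised like this (a sketch assuming SciPy >= 0.12; the matrices are deliberately trivial):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve, inv, expm

A = sparse.csc_matrix(2.0 * np.eye(3))   # trivially invertible sparse matrix

# spsolve's right-hand side may now be a matrix, not just a vector
B = sparse.csc_matrix(np.eye(3))
X = spsolve(A, B)                        # solves A @ X = B; X is sparse

# sparse inverse, computed via spsolve
Ainv = inv(A)                            # 0.5 * identity

# sparse matrix exponential; expm of the zero matrix is the identity
E = expm(sparse.csc_matrix((3, 3)))
```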
Other changes
=============

Authors
=======
* Anton Akhmerov +
* Alexander Eberspächer +
* Anne Archibald
* Jisk Attema +
* K.-Michael Aye +
* bemasc +
* Sebastian Berg +
* François Boulogne +
* Matthew Brett
* Lars Buitinck
* Steven Byrnes +
* Tim Cera +
* Christian +
* Keith Clawson +
* David Cournapeau
* Nathan Crock +
* endolith
* Bradley M. Froehle +
* Matthew R Goodman
* Christoph Gohlke
* Ralf Gommers
* Robert David Grant +
* Yaroslav Halchenko
* Charles Harris
* Jonathan Helmus
* Andreas Hilboll
* Hugo +
* Oleksandr Huziy
* Jeroen Demeyer +
* Johannes Schönberger +
* Steven G. Johnson +
* Chris Jordan-Squire
* Jonathan Taylor +
* Niklas Kroeger +
* Jerome Kieffer +
* kingson +
* Josh Lawrence
* Denis Laxalde
* Alex Leach +
* Lorenzo Luengo +
* Stephen McQuay +
* MinRK
* Sturla Molden +
* Eric Moore +
* mszep +
* Matt Newville +
* Vlad Niculae
* Travis Oliphant
* David Parker +
* Fabian Pedregosa
* Josef Perktold
* Zach Ploskey +
* Alex Reinhart +
* Richard Lindsley +
* Gilles Rochefort +
* Ciro Duran Santillli +
* Jan Schlueter +
* Jonathan Scholz +
* Anthony Scopatz
* Skipper Seabold
* Fabrice Silva +
* Scott Sinclair
* Jacob Stevenson +
* Sturla Molden +
* Julian Taylor +
* thorstenkranz +
* John Travers +
* True Price +
* Nicky van Foreest
* Jacob Vanderplas
* Patrick Varilly
* Daniel Velkov +
* Pauli Virtanen
* Stefan van der Walt
* Warren Weckesser

A total of 74 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.

From suryak at ieee.org Sat Mar 30 12:15:48 2013 From: suryak at ieee.org (Surya Kasturi) Date: Sat, 30 Mar 2013 21:45:48 +0530 Subject: [SciPy-Dev] How to load sample data's respective static files (SPC) Message-ID:

Hi,

Please let me know if you can help me with the following:

During development, after loading sample data using ``./manage.py loaddata
sample``, it does load submissions and revisions.
However, what about their respective static files (in the case of snippets
and libraries) that should be restored to the "static_root/code" directory,
along with their respective .hg repos? Whenever we edit those sample
snippets, they raise an IOError pointing to those static files.

-- surya

From pav at iki.fi Sat Mar 30 12:45:30 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 30 Mar 2013 18:45:30 +0200 Subject: [SciPy-Dev] How to load sample data's respective static files (SPC) In-Reply-To: References: Message-ID:

30.03.2013 18:15, Surya Kasturi wrote:
> Please let me know if you can help me on the below things:
>
> During the development, after loading sample data using ``./manage.py
> loaddata sample``
>
> It does load submissions, revisions. However, what about their
> respective static files (in case of snippets, libraries) that should be
> restored to "static_root/code" directory along with their respective .hg
> repos
>
> When ever we edit those sample snippets they are raising IOError
> pointing to those respective static files..

I don't know, but I suspect Django doesn't have built-in functionality for
this. You'll need to browse through the Django docs to see if there's a hook
for it.

However, it's likely not worth your time to try to find out how to do it.
It's just some placeholder lorem ipsum data. The sample dataset should be
split into two parts --- one fixture that contains the tag and license
definitions etc. (stuff that is actually useful), and the second part with
the lorem ipsum.
-- Pauli Virtanen

From tim.leslie at gmail.com Sat Mar 30 22:08:10 2013 From: tim.leslie at gmail.com (Tim Leslie) Date: Sun, 31 Mar 2013 13:08:10 +1100 Subject: [SciPy-Dev] ANN: SciPy 0.12.0 release candidate 1 In-Reply-To: References: Message-ID:

Hi Ralf,

scipy.misc.pilutil.radon is marked as deprecated and to be removed in 0.12,
but is still available:

https://github.com/scipy/scipy/blob/maintenance/0.12.x/scipy/misc/pilutil.py#L456

Should it be removed before the release?

Cheers,

Tim

On 30 March 2013 23:31, Ralf Gommers wrote:
> Hi,
>
> I am pleased to announce the availability of the first release candidate of
> SciPy 0.12.0. This is shaping up to be another solid release, with some cool
> new features (see highlights below) and a large amount of bug fixes and
> maintenance work under the hood. The number of contributors also keeps
> rising steadily; we're at 74 so far for this release.
>
> Sources and binaries can be found at
> http://sourceforge.net/projects/scipy/files/scipy/0.12.0rc1/, release notes
> are copied below.
>
> Please try this release and report any problems on the mailing list. If no
> issues are found, the final release will be in one week.
>
> Cheers,
> Ralf
>
> ==========================
> SciPy 0.12.0 Release Notes
> ==========================
>
> .. note:: Scipy 0.12.0 is not released yet!
>
> .. contents::
>
> SciPy 0.12.0 is the culmination of 7 months of hard work. It contains
> many new features, numerous bug-fixes, improved test coverage and
> better documentation. There have been a number of deprecations and
> API changes in this release, which are documented below. All users
> are encouraged to upgrade to this release, as there are a large number
> of bug-fixes and optimizations. Moreover, our development attention
> will now shift to bug-fix releases on the 0.12.x branch, and on adding
> new features on the master branch.
>
> Some of the highlights of this release are:
>
> - Completed QHull wrappers in scipy.spatial.
> - cKDTree now a drop-in replacement for KDTree.
> - A new global optimizer, basinhopping.
> - Support for Python 2 and Python 3 from the same code base (no more 2to3).
>
> This release requires Python 2.6, 2.7 or 3.1-3.3 and NumPy 1.5.1 or greater.
> Support for Python 2.4 and 2.5 has been dropped as of this release.
>
> New features
> ============
>
> ``scipy.spatial`` improvements
> ------------------------------
>
> cKDTree feature-complete
> ^^^^^^^^^^^^^^^^^^^^^^^^
> Cython version of KDTree, cKDTree, is now feature-complete. Most operations
> (construction, query, query_ball_point, query_pairs, count_neighbors and
> sparse_distance_matrix) are between 200 and 1000 times faster in cKDTree
> than in KDTree.
>
> [remainder of the release notes snipped; identical to the copy earlier in
> this digest]

From ralf.gommers at gmail.com Sun Mar 31 07:35:21 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 31 Mar 2013 13:35:21 +0200 Subject: [SciPy-Dev] ANN: SciPy 0.12.0 release candidate 1 In-Reply-To: References: Message-ID:

On Sun, Mar 31, 2013 at 4:08 AM, Tim Leslie wrote:
> Hi Ralf,
>
> scipy.misc.pilutil.radon is marked as deprecated and to be removed in
> 0.12, but is still available:
>
> https://github.com/scipy/scipy/blob/maintenance/0.12.x/scipy/misc/pilutil.py#L456
>
> Should it be removed before the release?

Hi Tim, thanks for noticing that. It should have been removed; however, after
RC1 is out I think it's a bit late. I'd prefer to change the message to 0.13
and remove it in master now. I'll send a PR for that.
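The change Ralf describes would look roughly like this (a hypothetical sketch; the function signature, message text, and the skimage pointer are assumptions for illustration, not SciPy's actual code):

```python
import warnings

def radon(im, theta=None):
    # Hypothetical stand-in for the deprecation shim: the removal version
    # in the warning message is bumped from 0.12 to 0.13, while the
    # function itself stays importable until the actual removal in master.
    warnings.warn("`radon` is deprecated and will be removed in SciPy 0.13; "
                  "use skimage.transform.radon instead.",
                  DeprecationWarning)
    return None
```

Callers then get a ``DeprecationWarning`` for one more release cycle before the function disappears.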
Ralf

> Cheers,
>
> Tim
>
> [remainder of Tim's message, including the quoted release notes, snipped]

From tim.leslie at gmail.com Sun Mar 31 11:15:25 2013 From: tim.leslie at gmail.com (Tim Leslie) Date: Mon, 1 Apr 2013 02:15:25 +1100 Subject: [SciPy-Dev] ANN: SciPy 0.12.0 release candidate 1 In-Reply-To: References: Message-ID:

That sounds fine to me. I don't actually use the function; I just noticed
that comment while browsing the source.
Tim

On 31 March 2013 22:35, Ralf Gommers wrote:
>> Should it be removed before the release?
>
> Hi Tim, thanks for noticing that. It should have been removed, however after
> RC1 is out I think it's a bit late. I'd prefer to change the message to 0.13
> and remove it in master now. I'll send a PR for that.
>
> Ralf
>
> [remainder of the quoted thread snipped]