From dwf at cs.toronto.edu  Fri May 1 00:10:10 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 1 May 2009 00:10:10 -0400
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com>
Message-ID: <52BB8B99-5F9F-4A6F-9E7F-2248B8809012@cs.toronto.edu>

On 30-Apr-09, at 11:52 PM, Ondrej Certik wrote:

> Is umfpack in scipy really broken? That is very disappointing. I was
> planning to use umfpack through scipy. I hope it will not be difficult
> for me to fix it.

I don't use it, but a friend of mine was having trouble building SciPy from svn recently, and UMFPACK turned out to be the culprit. I have been planning to open a ticket, but I haven't had a chance to try it out myself and confirm the behaviour.

David

From gael.varoquaux at normalesup.org  Fri May 1 04:17:29 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 10:17:29 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To:
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org>
Message-ID: <20090501081729.GD30225@phare.normalesup.org>

On Thu, Apr 30, 2009 at 11:47:38PM -0400, Nathan Bell wrote:
> On Thu, Apr 30, 2009 at 11:54 AM, Gael Varoquaux wrote:
> > I have dug a bit further. It seems that all that needs to be done is
> > expose the L, U, perm_c and perm_r attributes of the SciPyLUObject (AKA
> > factored_lu object) returned by splu. Of course, this is easier said than
> > done, as exposing these objects requires creating scipy sparse matrices
> > from the inner SuperLU representation of these objects.

> > I'd really appreciate it if someone (most probably Nathan) could give me a
> > hand with this. I realise that I am asking people to do my work, and I
> > know exactly what my reaction is when someone comes around to me with
> > this request (i.e., not happy), but I am not sure I have the time required
> > to learn the library and the tools to do this myself, and would hate to
> > have to find an ugly workaround (this does sound like a bad excuse).

> I hate to disappoint, but I don't have enough free time right now. If
> you or someone else wants to give it a try I can probably provide some
> support.

Fair enough. I can understand that.

In the mean time, I figured out that I only needed to expose the permutation vectors from the factored_lu object. These permutation vectors are not sparse objects, so it is very easy to expose them to numpy (just a matter of PyArray_SimpleNewFromData and using the 'base' trick, http://blog.enthought.com/?p=62, to do the reference counting). This is a bit ugly, because it does not expose all the information.

Another issue is that as long as there are references to the arrays created as views from the permutation vectors, the whole factored_lu object (much heavier in memory) is never garbage collected.

If we can live with these limitations, I think I can contribute a clean patch. It does not completely solve my problem, but it is still one step in the right direction.

Gaël
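A minimal pure-Python sketch of the "base" trick and of the garbage-collection caveat Gael describes above. The real patch works at the C level with PyArray_SimpleNewFromData; here np.frombuffer plays the role of the wrapped view, and the Factored class is a hypothetical stand-in for the heavy factored_lu object:

import numpy as np

class Factored:
    """Hypothetical stand-in for SuperLU's heavyweight factored_lu object."""
    def __init__(self, n):
        self._buf = bytearray(8 * n)  # imagine C-allocated memory inside SuperLU
        # The view records the buffer in its .base attribute, so the memory
        # stays alive for as long as the view does -- the 'base' trick.
        self.perm_r = np.frombuffer(self._buf, dtype=np.int64)

f = Factored(5)
perm = f.perm_r
del f
# The view still works, which is also the caveat Gael mentions: the
# underlying memory cannot be collected while any view remains alive.
print(perm.base is not None)   # True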
From gael.varoquaux at normalesup.org  Fri May 1 04:22:02 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 10:22:02 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com>
Message-ID: <20090501082202.GE30225@phare.normalesup.org>

On Thu, Apr 30, 2009 at 08:52:28PM -0700, Ondrej Certik wrote:
> Is umfpack in scipy really broken? That is very disappointing.

There are some signs of it being deprecated in the source code.

> I was planning to use umfpack through scipy. I hope it will not be
> difficult for me to fix it.

I had an email exchange with one of the pysparse authors, and it does seem that pysparse is currently well maintained, and the latest release may be exposing the best wrappers to umfpack. In the interest of reducing duplication and exposing a coherent view to our users, it might be good to focus on pysparse for sparse matrix functionality not exposed in scipy.

This brings up an important point: how do we list semi-official extensions to scipy that we know are well QAed and complement the functionality of scipy?

Gaël

From david at ar.media.kyoto-u.ac.jp  Fri May 1 04:08:37 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 01 May 2009 17:08:37 +0900
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501082202.GE30225@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org>
Message-ID: <49FAAE05.3090206@ar.media.kyoto-u.ac.jp>

Gael Varoquaux wrote:
> On Thu, Apr 30, 2009 at 08:52:28PM -0700, Ondrej Certik wrote:
>> Is umfpack in scipy really broken? That is very disappointing.
>
> There are some signs of it being deprecated in the source code.

umfpack is under the GPL. My understanding is that new contributions to umfpack should go to the corresponding scikit.

cheers,

David

From gael.varoquaux at normalesup.org  Fri May 1 04:33:42 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 10:33:42 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FAAE05.3090206@ar.media.kyoto-u.ac.jp>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp>
Message-ID: <20090501083342.GF30225@phare.normalesup.org>

On Fri, May 01, 2009 at 05:08:37PM +0900, David Cournapeau wrote:
> Gael Varoquaux wrote:
>> On Thu, Apr 30, 2009 at 08:52:28PM -0700, Ondrej Certik wrote:
>>> Is umfpack in scipy really broken? That is very disappointing.
>> There are some signs of it being deprecated in the source code.
> umfpack is under the GPL. My understanding is that new contributions to
> umfpack should go to the corresponding scikit.

Right, new versions of UMFPACK are indeed under the GPL; I just noticed that. Have you tried contacting Tim Davis about that? The web page does suggest contacting him for distributing UMFPACK under a different license. My guess is that distributing UMFPACK under BSD with scipy would be contrary to distributing it under the GPL on the main download page, so our chances are small.

Now I understand better the switch to SuperLU.
Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 05:07:26 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 11:07:26 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501083342.GF30225@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org>
Message-ID: <49FABBCE.7060700@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 05:08:37PM +0900, David Cournapeau wrote:
>> Gael Varoquaux wrote:
>>> On Thu, Apr 30, 2009 at 08:52:28PM -0700, Ondrej Certik wrote:
>>>> Is umfpack in scipy really broken? That is very disappointing.
>>> There are some signs of it being deprecated in the source code.
>> umfpack is under the GPL. My understanding is that new contributions to
>> umfpack should go to the corresponding scikit.
>
> Right, new versions of UMFPACK are indeed under the GPL; I just noticed
> that. Have you tried contacting Tim Davis about that? The web page does
> suggest contacting him for distributing UMFPACK under a different license.
> My guess is that distributing UMFPACK under BSD with scipy would be
> contrary to distributing it under the GPL on the main download page, so
> our chances are small.
>
> Now I understand better the switch to SuperLU.

Yes, we have tried to contact Tim Davis (me and Nathan Bell, if I remember correctly), but he refused to provide an exception for scipy. That is why the umfpack wrappers (written quite some time ago by me) were changed into the umfpack scikit.

Now there is a situation where some version of the wrappers still survives in scipy proper, while another version is in the scikit. This is not good, I know, but I have been very busy recently. I use the wrappers on a daily basis without problems myself - if you have problems, open a ticket, please, and notify me.

r.

From lists at vrbka.net  Fri May 1 04:56:18 2009
From: lists at vrbka.net (Lubos Vrbka)
Date: Fri, 01 May 2009 10:56:18 +0200
Subject: [SciPy-user] fourier transform in numpy/scipy
Message-ID: <49FAB932.30301@vrbka.net>

Hi guys,

Are the discrete Fourier transform pairs in numpy/scipy self-adjoint (np.fft.fft / np.fft.ifft)? I read somewhere that this property doesn't necessarily have to be fulfilled. I performed some tests and it seems that it indeed is so, but I wanted to be sure...

Best,

--
Lubos
_ at _"
http://www.lubos.vrbka.net

From gael.varoquaux at normalesup.org  Fri May 1 05:43:30 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 11:43:30 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FABBCE.7060700@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz>
Message-ID: <20090501094330.GG30225@phare.normalesup.org>

On Fri, May 01, 2009 at 11:07:26AM +0200, Robert Cimrman wrote:
> if you have problems, open a ticket, please, and notify me.

http://projects.scipy.org/scipy/ticket/935

Stefan is looking at fixing a few things with the scikit. IMHO the UMFPACK code in scipy should either be killed or fixed.

By the way, you need to add an 'import nose' at the end of 'test_umfpack.py' in the scikit, because nose is no longer imported in numpy.testing.

Thanks for all your work,

Gaël
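Returning to Lubos's FFT question above: a quick numerical spot check of the kind he describes, confirming that np.fft.fft and np.fft.ifft undo each other (a test, not a proof):

import numpy as np

x = np.random.randn(128) + 1j * np.random.randn(128)
# Round trip: ifft undoes fft up to floating-point error.
print(np.allclose(np.fft.ifft(np.fft.fft(x)), x))   # True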
From cimrman3 at ntc.zcu.cz  Fri May 1 05:49:34 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 11:49:34 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501094330.GG30225@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org>
Message-ID: <49FAC5AE.6090902@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 11:07:26AM +0200, Robert Cimrman wrote:
>> if you have problems, open a ticket, please, and notify me.
>
> http://projects.scipy.org/scipy/ticket/935

Thanks!

> Stefan is looking at fixing a few things with the scikit. IMHO the
> UMFPACK code in scipy should either be killed or fixed.

You mean kill the code in scipy proper, and maintain the scikit? +1 to that.

> By the way, you need to add an 'import nose' at the end of
> 'test_umfpack.py' in the scikit, because nose is no longer imported in
> numpy.testing.

I see, thanks!

r.

From gael.varoquaux at normalesup.org  Fri May 1 05:51:52 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 11:51:52 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FAC5AE.6090902@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz>
Message-ID: <20090501095152.GA18818@phare.normalesup.org>

On Fri, May 01, 2009 at 11:49:34AM +0200, Robert Cimrman wrote:
>> Stefan is looking at fixing a few things with the scikit. IMHO the
>> UMFPACK code in scipy should either be killed or fixed.

> You mean kill the code in scipy proper, and maintain the scikit? +1 to that.

Actually, I'd love it if the (currently BSD) umfpack in scipy could be fixed, because I'll probably need to ship some BSD code at some point, but if nobody can do this, then it needs to be killed.

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 06:04:40 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 12:04:40 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501095152.GA18818@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org>
Message-ID: <49FAC938.6000304@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 11:49:34AM +0200, Robert Cimrman wrote:
>>> Stefan is looking at fixing a few things with the scikit. IMHO the
>>> UMFPACK code in scipy should either be killed or fixed.
>
>> You mean kill the code in scipy proper, and maintain the scikit? +1 to that.
>
> Actually, I'd love it if the (currently BSD) umfpack in scipy could be fixed,
> because I'll probably need to ship some BSD code at some point, but if
> nobody can do this, then it needs to be killed.
There was a discussion about this some time ago; the result was that I made the scikit, and the scipy wrappers were "scheduled to be dumped" - there were some packaging reasons for that. It was not me who wanted the umfpack wrappers to go out :) - it was just more work for me with no functional gain. So of course the umfpack in scipy can be fixed and preserved, and I, personally, am +1 to that. But I won't commit my time to that unless there is a general consensus.

cheers,

r.

From pav at iki.fi  Fri May 1 06:23:35 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 1 May 2009 10:23:35 +0000 (UTC)
Subject: [SciPy-user] Sparse factorisation
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org>
Message-ID:

Fri, 01 May 2009 11:51:52 +0200, Gael Varoquaux wrote:
[clip]
> Actually, I'd love it if the (currently BSD) umfpack in scipy could be
> fixed, because I'll probably need to ship some BSD code at some point,
> but if nobody can do this, then it needs to be killed.

I think it's probably not really broken. (I use it via sparse.linalg.spsolve quite often, and see the umfpack deprecation warnings...)

Can you check your build log to see if the umfpack extension was really built? (It's supposed to fail "silently" if it can't be built, so that the rest of scipy.sparse still functions.)

--
Pauli Virtanen

From stefan at sun.ac.za  Fri May 1 06:25:09 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Fri, 1 May 2009 12:25:09 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FAC5AE.6090902@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz>
Message-ID: <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com>

2009/5/1 Robert Cimrman :
> Gael Varoquaux wrote:
>> On Fri, May 01, 2009 at 11:07:26AM +0200, Robert Cimrman wrote:
>>> if you have problems, open a ticket, please, and notify me.
>>
>> http://projects.scipy.org/scipy/ticket/935
>
> Thanks!
>
>> Stefan is looking at fixing a few things with the scikit.
I don't currently have write access to the scikits SVN, so I'm doing the work here:

http://github.com/stefanv/umfpack

Cheers
Stéfan

From gael.varoquaux at normalesup.org  Fri May 1 06:40:32 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 12:40:32 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To:
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org>
Message-ID: <20090501104031.GC18818@phare.normalesup.org>

On Fri, May 01, 2009 at 10:23:35AM +0000, Pauli Virtanen wrote:
> Fri, 01 May 2009 11:51:52 +0200, Gael Varoquaux wrote:
> [clip]
>> Actually, I'd love it if the (currently BSD) umfpack in scipy could be
>> fixed, because I'll probably need to ship some BSD code at some point,
>> but if nobody can do this, then it needs to be killed.

> I think it's probably not really broken. (I use it via sparse.linalg.spsolve
> quite often, and see the umfpack deprecation warnings...)

Correct. It is just that numpy didn't detect umfpack when I built. I am investigating why; it shouldn't happen. However, the error message could be improved.

> Can you check your build log to see if the umfpack extension was really
> built? (It's supposed to fail "silently" if it can't be built, so that
> the rest of scipy.sparse still functions.)

As you have mentioned on the trac ticket, the build is failing, and we'll fix this. I feel we are making progress on this issue. Fixing the error message is really important, though. I'll cook up a patch to get things working on my system, with a proper error message.

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 06:54:48 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 12:54:48 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com>
Message-ID: <49FAD4F8.3000302@ntc.zcu.cz>

Stéfan van der Walt wrote:
> 2009/5/1 Robert Cimrman :
>> Gael Varoquaux wrote:
>>> On Fri, May 01, 2009 at 11:07:26AM +0200, Robert Cimrman wrote:
>>>> if you have problems, open a ticket, please, and notify me.
>>> http://projects.scipy.org/scipy/ticket/935
>> Thanks!
>>
>>> Stefan is looking at fixing a few things with the scikit.
>
> I don't currently have write access to the scikits SVN, so I'm doing
> the work here:
>
> http://github.com/stefanv/umfpack

Thank you, Stefan! I really appreciate that you jumped in - numpy/scipy evolve so fast now that I lag behind the framework. I use git too for my projects - it would be nice to have a git mirror of the scikits.

r.
From gael.varoquaux at normalesup.org  Fri May 1 07:19:08 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 13:19:08 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501104031.GC18818@phare.normalesup.org>
References: <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org> <20090501104031.GC18818@phare.normalesup.org>
Message-ID: <20090501111908.GA29281@phare.normalesup.org>

On Fri, May 01, 2009 at 12:40:32PM +0200, Gael Varoquaux wrote:
> As you have mentioned on the trac ticket, the build is failing, and
> we'll fix this.

OK, that was purely on my side. I was being stupid: I did not have the proper headers installed.

> I'll cook up a patch to get things working on my system, with a proper
> error message.

So, I'll make only an error message, something like:

"""
Scipy was built without umfpack support. For umfpack, you need to install
umfpack and the development headers before building scipy.
"""

We can add this on the import failure, in scipy/sparse/linalg/dsolve/umfpack/umfpack.py, line 13, in which case we need to delay this import. The other option is to add the error message in the init of UmfpackContext; it would thus give a meaningful error message if _um is None.

What do you think?

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 07:25:53 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 13:25:53 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501111908.GA29281@phare.normalesup.org>
References: <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org> <20090501104031.GC18818@phare.normalesup.org> <20090501111908.GA29281@phare.normalesup.org>
Message-ID: <49FADC41.4050300@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 12:40:32PM +0200, Gael Varoquaux wrote:
>> As you have mentioned on the trac ticket, the build is failing, and
>> we'll fix this.
>
> OK, that was purely on my side. I was being stupid: I did not have the
> proper headers installed.
>
>> I'll cook up a patch to get things working on my system, with a proper
>> error message.
>
> So, I'll make only an error message, something like:
>
> """
> Scipy was built without umfpack support. For umfpack, you need to install
> umfpack and the development headers before building scipy.
> """
>
> We can add this on the import failure, in
> scipy/sparse/linalg/dsolve/umfpack/umfpack.py, line 13, in which case we
> need to delay this import. The other option is to add the error message
> in the init of UmfpackContext; it would thus give a meaningful error
> message if _um is None.
>
> What do you think?

I suggest not printing anything unless the user wants to use it - so the UmfpackContext should take care of it, IMHO.

r.
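A hypothetical sketch of the behaviour Robert endorses here: importing stays silent, and the informative message appears only when an UmfpackContext is actually constructed. The module name _umfpack and the class skeleton are illustrative assumptions, not the actual 2009 source layout:

try:
    import _umfpack as _um   # hypothetical name for the wrapped extension
except ImportError:
    _um = None

class UmfpackContext:
    def __init__(self, family='di'):
        if _um is None:
            # Fail loudly, but only on use; importing scipy.sparse stays quiet.
            raise ImportError(
                "Scipy was built without umfpack support. For umfpack, you "
                "need to install umfpack and the development headers before "
                "building scipy.")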
From pav at iki.fi  Fri May 1 07:45:39 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 1 May 2009 11:45:39 +0000 (UTC)
Subject: [SciPy-user] Scikits Git mirrors (was: Sparse factorisation)
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com> <49FAD4F8.3000302@ntc.zcu.cz>
Message-ID:

Fri, 01 May 2009 12:54:48 +0200, Robert Cimrman wrote:
[clip]
> I use git too for my projects - it would be nice to have a git mirror of
> the scikits.

Here: http://projects.scipy.org/git/

--
Pauli Virtanen

From cimrman3 at ntc.zcu.cz  Fri May 1 07:50:17 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 13:50:17 +0200
Subject: [SciPy-user] Scikits Git mirrors
In-Reply-To:
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com> <49FAD4F8.3000302@ntc.zcu.cz>
Message-ID: <49FAE1F9.1030605@ntc.zcu.cz>

Pauli Virtanen wrote:
> Fri, 01 May 2009 12:54:48 +0200, Robert Cimrman wrote:
> [clip]
>> I use git too for my projects - it would be nice to have a git mirror of
>> the scikits.
>
> Here: http://projects.scipy.org/git/

Great! (Oh, I _really_ lag behind.)

r.

From pav at iki.fi  Fri May 1 07:54:50 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 1 May 2009 11:54:50 +0000 (UTC)
Subject: [SciPy-user] Scikits Git mirrors
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com> <49FAD4F8.3000302@ntc.zcu.cz> <49FAE1F9.1030605@ntc.zcu.cz>
Message-ID:

Fri, 01 May 2009 13:50:17 +0200, Robert Cimrman wrote:
> Pauli Virtanen wrote:
>> Fri, 01 May 2009 12:54:48 +0200, Robert Cimrman wrote: [clip]
>>> I use git too for my projects - it would be nice to have a git mirror
>>> of the scikits.
>>
>> Here: http://projects.scipy.org/git/
>
> Great! (Oh, I _really_ lag behind.)

Only half an hour or so :)

I couldn't figure out how to get the tag/branch layout of the scikits repository properly in git-svn, so right now they are missing.
--
Pauli Virtanen

From cimrman3 at ntc.zcu.cz  Fri May 1 08:00:09 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 14:00:09 +0200
Subject: [SciPy-user] Scikits Git mirrors
In-Reply-To:
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com> <49FAD4F8.3000302@ntc.zcu.cz> <49FAE1F9.1030605@ntc.zcu.cz>
Message-ID: <49FAE449.6090602@ntc.zcu.cz>

Pauli Virtanen wrote:
> Fri, 01 May 2009 13:50:17 +0200, Robert Cimrman wrote:
>> Pauli Virtanen wrote:
>>> Fri, 01 May 2009 12:54:48 +0200, Robert Cimrman wrote: [clip]
>>>> I use git too for my projects - it would be nice to have a git mirror
>>>> of the scikits.
>>> Here: http://projects.scipy.org/git/
>> Great! (Oh, I _really_ lag behind.)
>
> Only half an hour or so :)

So it's true - things tend to happen if enough people want them :)

BTW, is the mirroring working both ways? I have yet to look at git-svn.

thanks!

r.

From david at ar.media.kyoto-u.ac.jp  Fri May 1 07:49:59 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 01 May 2009 20:49:59 +0900
Subject: [SciPy-user] Scikits Git mirrors
In-Reply-To: <49FAE449.6090602@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <20090501082202.GE30225@phare.normalesup.org> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <9457e7c80905010325x15aba97xc9896ea2565a5488@mail.gmail.com> <49FAD4F8.3000302@ntc.zcu.cz> <49FAE1F9.1030605@ntc.zcu.cz> <49FAE449.6090602@ntc.zcu.cz>
Message-ID: <49FAE1E7.6060103@ar.media.kyoto-u.ac.jp>

Robert Cimrman wrote:
> So it's true - things tend to happen if enough people want them :)
>
> BTW, is the mirroring working both ways? I have yet to look at git-svn.

Yes, at least the numpy and scipy mirrors can be pushed to (I almost never use svn anymore for my own contributions to either project), but git-svn has some caveats. Some docs are here:

http://projects.scipy.org/numpy/wiki/GitMirror
http://projects.scipy.org/numpy/wiki/GitWorkflow

cheers,

David

From gael.varoquaux at normalesup.org  Fri May 1 10:21:27 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 16:21:27 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501081729.GD30225@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <20090501081729.GD30225@phare.normalesup.org>
Message-ID: <20090501142127.GG29660@phare.normalesup.org>

On Fri, May 01, 2009 at 10:17:29AM +0200, Gael Varoquaux wrote:
> In the mean time, I figured out that I only needed to expose the
> permutation vectors from the factored_lu object. These permutation
> vectors are not sparse objects, so it is very easy to expose them to
> numpy (just a matter of PyArray_SimpleNewFromData and using the 'base'
> trick, http://blog.enthought.com/?p=62, to do the reference counting).

> This is a bit ugly, because it does not expose all the information.
> Another issue is that as long as there are references to the arrays
> created as views from the permutation vectors, the whole factored_lu
> object (much heavier in memory) is never garbage collected.

> If we can live with these limitations, I think I can contribute a clean
> patch. It does not completely solve my problem, but it is still one step
> in the right direction.

I have attached a patch implementing the above plan. It is up for review.

http://projects.scipy.org/scipy/ticket/937

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 10:27:57 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 16:27:57 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090430155413.GD9195@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org>
Message-ID: <49FB06ED.8060308@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Thu, Apr 30, 2009 at 03:56:19PM +0200, Gael Varoquaux wrote:
>> I have been looking around in the sparse code source, as well as the
>> scikits.umfpack code source, and I must admit I am a bit at a loss as to
>> what is the best way to achieve my goals.
>
> I have dug a bit further. It seems that all that needs to be done is
> expose the L, U, perm_c and perm_r attributes of the SciPyLUObject (AKA
> factored_lu object) returned by splu. Of course, this is easier said than
> done, as exposing these objects requires creating scipy sparse matrices
> from the inner SuperLU representation of these objects.
>
> I'd really appreciate it if someone (most probably Nathan) could give me a
> hand with this. I realise that I am asking people to do my work, and I
> know exactly what my reaction is when someone comes around to me with
> this request (i.e., not happy), but I am not sure I have the time required
> to learn the library and the tools to do this myself, and would hate to
> have to find an ugly workaround (this does sound like a bad excuse).

Hi again, Gael,

I had completely forgotten that Nathan added an lu() method to the UmfpackContext class, which returns all the L, U, P, Q, R matrices.

from scipy.sparse.linalg.dsolve import umfpack
uc = umfpack.UmfpackContext()
uc.lu(a)

Is that what you need?

r.

From gael.varoquaux at normalesup.org  Fri May 1 10:40:03 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 16:40:03 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FB06ED.8060308@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <49FB06ED.8060308@ntc.zcu.cz>
Message-ID: <20090501144003.GH29660@phare.normalesup.org>

On Fri, May 01, 2009 at 04:27:57PM +0200, Robert Cimrman wrote:
> I had completely forgotten that Nathan added an lu() method to the
> UmfpackContext class, which returns all the L, U, P, Q, R matrices.

> from scipy.sparse.linalg.dsolve import umfpack
> uc = umfpack.UmfpackContext()
> uc.lu(a)

> Is that what you need?

It is indeed (well, almost). This is why fixing umfpack was important to me. And with my patch we now also have an option for users that do not have umfpack installed.

Now, what I really need is the same thing for a Cholesky, and there is a relationship between sparse LU and sparse Cholesky (google symamd), so I'll move forward from that.

I must worry about other things now; I spent way too much time understanding all this, and tomorrow I am leaving for a week...
Thanks for your input,

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 10:46:36 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 16:46:36 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501144003.GH29660@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <49FB06ED.8060308@ntc.zcu.cz> <20090501144003.GH29660@phare.normalesup.org>
Message-ID: <49FB0B4C.5090501@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 04:27:57PM +0200, Robert Cimrman wrote:
>> I had completely forgotten that Nathan added an lu() method to the
>> UmfpackContext class, which returns all the L, U, P, Q, R matrices.
>
>> from scipy.sparse.linalg.dsolve import umfpack
>> uc = umfpack.UmfpackContext()
>> uc.lu(a)
>
>> Is that what you need?
>
> It is indeed (well, almost). This is why fixing umfpack was important to
> me. And with my patch we now also have an option for users that do not
> have umfpack installed.

I see. Then splu() should be updated to interface both the SuperLU and umfpack factorization facilities, similarly to spsolve() and factorized() ...

> Now, what I really need is the same thing for a Cholesky, and there is
> a relationship between sparse LU and sparse Cholesky (google symamd), so
> I'll move forward from that.

... and spchol() should be added.

> I must worry about other things now; I spent way too much time
> understanding all this, and tomorrow I am leaving for a week...

See you here afterwards, then.

cheers,

r.

From gael.varoquaux at normalesup.org  Fri May 1 10:51:21 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 1 May 2009 16:51:21 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <49FB0B4C.5090501@ntc.zcu.cz>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <49FB06ED.8060308@ntc.zcu.cz> <20090501144003.GH29660@phare.normalesup.org> <49FB0B4C.5090501@ntc.zcu.cz>
Message-ID: <20090501145121.GA6637@phare.normalesup.org>

On Fri, May 01, 2009 at 04:46:36PM +0200, Robert Cimrman wrote:
> ... and spchol() should be added.

That would be fantastic. I'll see if I find time, but I doubt it: it is a non-trivial bit, and I find exposing sparse matrices from C to numpy tough. In addition, I don't think that SuperLU has a Cholesky routine. As discussed in projects.scipy.org/scipy/ticket/261, an interesting avenue would be the TAUCS package.

Gaël

From cimrman3 at ntc.zcu.cz  Fri May 1 11:03:33 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 01 May 2009 17:03:33 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501145121.GA6637@phare.normalesup.org>
References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <49FB06ED.8060308@ntc.zcu.cz> <20090501144003.GH29660@phare.normalesup.org> <49FB0B4C.5090501@ntc.zcu.cz> <20090501145121.GA6637@phare.normalesup.org>
Message-ID: <49FB0F45.6030706@ntc.zcu.cz>

Gael Varoquaux wrote:
> On Fri, May 01, 2009 at 04:46:36PM +0200, Robert Cimrman wrote:
>> ... and spchol() should be added.
>
> That would be fantastic. I'll see if I find time, but I doubt it: it is
> a non-trivial bit, and I find exposing sparse matrices from C to numpy
> tough. In addition, I don't think that SuperLU has a Cholesky routine.
> As discussed in projects.scipy.org/scipy/ticket/261, an interesting
> avenue would be the TAUCS package.

TAUCS looks very interesting, thanks for mentioning it.

r.
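For reference, the factorized() interface Robert mentions already illustrates the pattern he describes: factor once, then reuse the factorization for many right-hand sides (using UMFPACK when available, falling back to SuperLU otherwise). A small usage sketch with a made-up matrix:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import factorized   # lived under dsolve in 2009 trees

A = sparse.csc_matrix([[4., 1., 0.],
                       [1., 3., 1.],
                       [0., 1., 2.]])
solve = factorized(A)                  # compute the factorization once
x1 = solve(np.array([1., 0., 0.]))     # then solve cheaply, repeatedly
x2 = solve(np.array([0., 1., 0.]))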
From stefan at sun.ac.za  Fri May 1 12:45:43 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Fri, 1 May 2009 18:45:43 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <20090501111908.GA29281@phare.normalesup.org>
References: <85b5c3130904302052k7fe30d7ekeb3de8fbf9cd8f72@mail.gmail.com> <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org> <20090501104031.GC18818@phare.normalesup.org> <20090501111908.GA29281@phare.normalesup.org>
Message-ID: <9457e7c80905010945l2387da0r9397c25124c73706@mail.gmail.com>

2009/5/1 Gael Varoquaux :
> On Fri, May 01, 2009 at 12:40:32PM +0200, Gael Varoquaux wrote:
>> As you have mentioned on the trac ticket, the build is failing, and
>> we'll fix this.
>
> OK, that was purely on my side. I was being stupid: I did not have the
> proper headers installed.

I'm always caught out by specifying illegal paths in site.cfg, so I have now added a warning for that as well.

Cheers
Stéfan

From kcarnold at mit.edu  Fri May 1 13:43:36 2009
From: kcarnold at mit.edu (Kenneth Arnold)
Date: Fri, 1 May 2009 13:43:36 -0400
Subject: [SciPy-user] sparse SVD
Message-ID:

2009/4/9 Rob Patro :
> Is there any implementation of sparse SVD available in scipy? If not,
> does anyone know of an implementation available in python at all? I'd
> like to port a project on which I'm working from Matlab to Python, but
> it is crucial that I am able to perform the SVD of large and *very*
> sparse matrices.

The Commonsense Computing Initiative at the MIT Media Lab (http://csc.media.mit.edu, but probably best known for http://openmind.media.mit.edu) had a similar problem two years ago: we wanted to run an SVD on a large, sparse semantic network. So we built Divisi (http://divisi.media.mit.edu), which is based on numpy, but also:

* wraps SVDLIBC (first with SWIG, now with Cython) (the SVD functionality is abstracted, so we could easily switch to something like cvxopt or ARPACK, which I hadn't heard of)
* has a data structure for sparse tensors (i.e., matrices with dim > 2)
* has a layered model of views enabling:
  - labeling rows and columns with arbitrary Python objects
  - various forms of normalization
  - unfolding tensors into 2D for the higher-order SVD (HO-SVD) operation
* supports various math with the SVD results
* supports "blending" data from different sources
* (in progress) can reason by association as well as similarity

The result, refined over almost 2 years of work (by grad students), has powered nearly all of our group's research during this time. It's released under the GPL, but other licensing is possible, especially if your company sponsors the Media Lab.

If you have the numpy headers, you should be able to just `easy_install divisi`. We've recently been working on distribution, so let us know if anything about that is broken.

We think that significant chunks of this code would make a great addition to numpy/scipy. We don't have the resources to push integration ourselves, though, but we could certainly help anyone who is interested in assimilating our code. And in the mean time it should be useful to anyone wanting to run sparse SVDs.

-Ken
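For readers without Divisi or SVDLIBC: a rough sketch of getting a truncated sparse SVD out of ARPACK, one of the back-ends Ken mentions, via the symmetric eigenproblem on A^T A. The import path of the ARPACK wrapper has moved around between scipy versions, and the normal-equations trick loses accuracy for small singular values, so treat this only as a sketch:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh   # ARPACK symmetric eigensolver

def truncated_svd(A, k):
    # Largest eigenpairs of A^T A yield the top-k singular triplets of A.
    w, v = eigsh((A.T * A).tocsc(), k=k)
    s = np.sqrt(np.maximum(w, 0.0))     # singular values
    u = (A * v) / s                     # matching left singular vectors
    return u, s, v.T

A = sparse.rand(200, 50, density=0.05, format='csr')
u, s, vt = truncated_svd(A, k=5)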
From stefan at sun.ac.za  Fri May 1 13:46:10 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Fri, 1 May 2009 19:46:10 +0200
Subject: [SciPy-user] firwin upgrades
In-Reply-To: <23246480.post@talk.nabble.com>
References: <23246480.post@talk.nabble.com>
Message-ID: <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com>

Hi Tom

2009/4/26 Tom K. :
> 1) add "btype" kwarg after cutoff that may be any key of the band_dict
> 2) Allow cutoff to be an arbitrary length list. Only need a boolean to
> 3) Same as option 2), but instead of a new boolean argument, allow "0" as
>
> What are your preferences from the above (or, suggest an alternative)?

My preference is for a notation that allows the reader of code to see what is happening without examining the API of firfilter. So, suggestion 1, with the prerequisite that the user must specify the filter type, appeals to me:

firfilter([0, 0.1], type='pass')    # low-pass
firfilter([0, 0.1], type='stop')    # high-pass

firfilter([0.1, 0.2], type='pass')  # band-pass filter
firfilter([0.1, 0.2], type='stop')  # band-stop filter

> NULL AT NYQUIST ISSUE
>  a) issue a warning, increase length by 1, and return the longer filter
> [this is the behavior of another popular signal processing package]
>  b) design the filter anyway, and issue either an error if noScale is False
> (since the scaling would cause a divide by 0 - see proposal below) or a
> warning if noScale is True.

Not sure about this. Is there an elegant solution? (a) seems as good as any.

> SUPPORT FOR NO SCALING
> Currently, the filter is scaled so that the DC value of the filter is
> unity. This filter no longer minimizes the integral of the square of the
> error because the scaling is not natural. I propose we provide a boolean
> "noScale" argument that will allow the filter to float according to the
> actual least-squares filter. How does that sound?

Sure, as long as we avoid the camelCaps :-) In your experience, is "no_scale=False" the most common behaviour?

Regards
Stéfan

From stefan at sun.ac.za  Fri May 1 13:58:23 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Fri, 1 May 2009 19:58:23 +0200
Subject: [SciPy-user] sparse SVD
In-Reply-To:
References:
Message-ID: <9457e7c80905011058y91bd794s54823406399f07ba@mail.gmail.com>

Hi Ken

2009/5/1 Kenneth Arnold :
> We think that significant chunks of this code would make a great
> addition to numpy/scipy. We don't have the resources to push
> integration ourselves, though, but we could certainly help anyone who
> is interested in assimilating our code. And in the mean time it should
> be useful to anyone wanting to run sparse SVDs.

We are always glad for new code contributions! SciPy and NumPy are BSD-licensed, so would your lab be able to relicense the code? I think we could benefit from having both SVDLIBC and ARPACK sparse SVD wrappers in SciPy.

Can you tell us a bit more about the sparse tensor representation you use?
Regards
Stéfan

From gael.varoquaux at normalesup.org  Fri May 1 19:39:46 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 2 May 2009 01:39:46 +0200
Subject: [SciPy-user] Sparse factorisation
In-Reply-To: <9457e7c80905010945l2387da0r9397c25124c73706@mail.gmail.com>
References: <49FAAE05.3090206@ar.media.kyoto-u.ac.jp> <20090501083342.GF30225@phare.normalesup.org> <49FABBCE.7060700@ntc.zcu.cz> <20090501094330.GG30225@phare.normalesup.org> <49FAC5AE.6090902@ntc.zcu.cz> <20090501095152.GA18818@phare.normalesup.org> <20090501104031.GC18818@phare.normalesup.org> <20090501111908.GA29281@phare.normalesup.org> <9457e7c80905010945l2387da0r9397c25124c73706@mail.gmail.com>
Message-ID: <20090501233946.GC31876@phare.normalesup.org>

On Fri, May 01, 2009 at 06:45:43PM +0200, Stéfan van der Walt wrote:
> I'm always caught out by specifying illegal paths in site.cfg, so I
> have now added a warning for that as well.

Good. Warnings are very useful.

Gaël

From adam.ginsburg at colorado.edu  Fri May 1 20:19:58 2009
From: adam.ginsburg at colorado.edu (Adam Ginsburg)
Date: Fri, 1 May 2009 18:19:58 -0600
Subject: [SciPy-user] Constrained least-squares fitting routine?
Message-ID:

Hi Scipy group,

Is there a constrained least-squares fitting routine available, or can anyone offer me tips on implementing such a beast? I have been using scipy.optimize.leastsq, but I do not know how to constrain parameters. The model I'm looking to emulate is Craig Markwardt's mpfit.pro (http://www.physics.wisc.edu/~craigm/idl/down/mpfit.pro), in particular the parinfo section that allows max/min and fixed parameters. I've tried simply constraining parameters in my fitting function using if statements to set min/max values, but this strategy fails, I think because the algorithm pushes into space outside of the limits and can't get back.

I don't think the constrained fitting tools, e.g. fmin_cobyla, are what I'm looking for, but I can't be certain I understand them. Are they likely/likelier to get stuck in local minima than the Levenberg-Marquardt algorithm used in leastsq?

Thanks,
Adam

From thomas.robitaille at gmail.com  Fri May 1 20:49:31 2009
From: thomas.robitaille at gmail.com (Thomas Robitaille)
Date: Fri, 1 May 2009 20:49:31 -0400
Subject: [SciPy-user] Ignoring pixels with gaussian_filter
In-Reply-To: <49F288A7.7040609@ncsu.edu>
References: <274C4D39-07A2-4236-B14C-6C104EE11352@gmail.com> <49F288A7.7040609@ncsu.edu>
Message-ID: <9673F763-87D7-4268-BA86-E296D1D74297@gmail.com>

Thanks a lot for pointing this out! In the end I've used this property and implemented a Fortran routine to do the smoothing, wrapped using f2py. The resulting routine is almost as fast as the default gaussian_filter from scipy.

Cheers,
Thomas

On 24 Apr 2009, at 23:51, alex wrote:

> Thomas Robitaille wrote:
>> ...
>> # define gaussian function
>> def gaussian(cx, cy, w):
>>     return lambda x,y: np.exp(-(((cx-x)/w)**2+((cy-y)/w)**2)/2)
>> ...
>
> A neat mathematical property of gaussian blur that is not true of 2d
> kernels in general is that it can be applied to the x and y axes
> separately. That is, it can be implemented as two 1d passes instead of
> one 2d pass. This could speed up your code a lot if you aren't already
> doing it.
>
> Alex
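The separability Alex describes is easy to confirm with scipy.ndimage, whose n-dimensional gaussian filter is itself built from successive 1-d passes:

import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)
two_d = ndimage.gaussian_filter(img, sigma=2.0)
# The same result from two explicit 1-d passes, one per axis.
one_d = ndimage.gaussian_filter1d(
    ndimage.gaussian_filter1d(img, sigma=2.0, axis=0), sigma=2.0, axis=1)
print(np.allclose(two_d, one_d))   # True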
From rob.clewley at gmail.com  Fri May 1 21:07:41 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 1 May 2009 21:07:41 -0400
Subject: [SciPy-user] Constrained least-squares fitting routine?
In-Reply-To:
References:
Message-ID:

On Fri, May 1, 2009 at 8:19 PM, Adam Ginsburg wrote:
> Hi Scipy group,
> Is there a constrained least-squares fitting routine available, or
> can anyone offer me tips on implementing such a beast?

I don't think so, but I'm not absolutely sure. Anyway, see below.

> I have been using scipy.optimize.leastsq, but I do not know how to
> constrain parameters. The model I'm looking to emulate is Craig
> Markwardt's mpfit.pro
> (http://www.physics.wisc.edu/~craigm/idl/down/mpfit.pro), in
> particular the parinfo section that allows max/min and fixed
> parameters. I've tried simply constraining parameters in my fitting
> function using if statements to set min/max values, but this strategy
> fails, I think because the algorithm pushes into space outside of the
> limits and can't get back.

Well, of course, because the poor algorithm can't see the discrete boundary "coming". At the very least you have to make the penalty vary smoothly with the parameters, because you're dealing with a *gradient* descent algorithm. I have had a lot of success with penalty functions, even though they are a bit of a hack and certainly don't come with any theoretical guarantees. You can try appropriately rescaled 1/x or log functions, and sometimes other funky things, provided at least that there is some feedback given to the algorithm about exactly *how* badly it is failing when it goes past the boundary (I sometimes scale a large constant penalty by the square of how far the parameter passed the boundary). Preferably, if you know that your solution won't be right at the boundary, you can make your penalty function kick in before the boundary is even reached, to push back from it before something bad might happen (in case your system catastrophically fails for values beyond the boundary).

In general this is an extremely non-trivial problem, and I'm not aware of good solutions apart from spending a lot more time analyzing your parameter space in other ways (sensitivities) and coming up with better measures of fitness than the naive "distance" between two curves (for instance).

HTH,
Rob

From aisaac at american.edu  Fri May 1 23:05:42 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 01 May 2009 23:05:42 -0400
Subject: [SciPy-user] Constrained least-squares fitting routine?
In-Reply-To:
References:
Message-ID: <49FBB886.80608@american.edu>

On 5/1/2009 8:19 PM Adam Ginsburg apparently wrote:
> Is there a constrained least-squares fitting routine available, or
> can anyone offer me tips on implementing such a beast?

Are any of these helpful to you?
http://openopt.org/Problems
(e.g., http://openopt.org/LLSP)

Alan Isaac
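A minimal sketch of the smooth-penalty idea Rob outlines above, fitting a made-up exponential model with leastsq; the bound, the penalty weight, and the data are illustrative assumptions only:

import numpy as np
from scipy.optimize import leastsq

xdata = np.linspace(0.0, 1.0, 50)
ydata = 2.0 * np.exp(-3.0 * xdata) + 0.01 * np.random.randn(50)

def residuals(p, upper=5.0):
    a, b = p
    res = ydata - a * np.exp(-b * xdata)
    # Smooth penalty: zero inside the bound, growing quadratically past it,
    # so the optimizer always sees a gradient pointing back inside.
    overshoot = max(b - upper, 0.0)
    return res + 1e3 * overshoot ** 2

p_opt, ier = leastsq(residuals, [1.0, 1.0])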
From william.ratcliff at gmail.com  Sat May 2 00:43:16 2009
From: william.ratcliff at gmail.com (william ratcliff)
Date: Sat, 2 May 2009 00:43:16 -0400
Subject: [SciPy-user] Constrained least-squares fitting routine?
In-Reply-To: <49FBB886.80608@american.edu>
References: <49FBB886.80608@american.edu>
Message-ID: <827183970905012143m25f5f43ar4cc9e27125f61fa0@mail.gmail.com>

Some time back, there was a python port of mpfit, using Numeric. I talked to that author and he is happy with a BSD license. Since then, several people have patched it to use numpy (see http://code.google.com/p/astrolibpy/), and the author there is happy with a BSD license - though there is room for cosmetic improvement. Does anyone know the original license for minpack?

I plan to talk to the author of mpfit next week and see if he is amenable to a BSD license. If so, would this fit into numpy? It takes care of the annoyance of making a wrapper to leastsq for the simple case of fixed parameters, and has an ansatz for dealing with limits. It relies on a QR factorization, but we could either switch out the python chunks of code which do that for their minpack equivalents, or use the Numerical Recipes suggestion (not the code!!!!) to use SVD instead of QR - but as a stop-gap, could one of the developers tell me: if we do manage to get BSD licensing agreements, can this go into scipy, or do we have to implement from scratch? Also, for the BSD agreements, are emails sufficient, or do I need to try to get faxes?

For openopt, there seems to be a way to fix variables only for the ralg algorithm, and I don't see where you get a covariance matrix out at the end, so that you have a fighting chance of getting error bars....

William

On Fri, May 1, 2009 at 11:05 PM, Alan G Isaac wrote:
> On 5/1/2009 8:19 PM Adam Ginsburg apparently wrote:
>> Is there a constrained least-squares fitting routine available, or
>> can anyone offer me tips on implementing such a beast?
>
> Are any of these helpful to you?
> http://openopt.org/Problems
> (e.g., http://openopt.org/LLSP)
>
> Alan Isaac

From robert.kern at gmail.com  Sat May 2 08:38:28 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 2 May 2009 07:38:28 -0500
Subject: [SciPy-user] Constrained least-squares fitting routine?
In-Reply-To: <827183970905012143m25f5f43ar4cc9e27125f61fa0@mail.gmail.com>
References: <49FBB886.80608@american.edu> <827183970905012143m25f5f43ar4cc9e27125f61fa0@mail.gmail.com>
Message-ID: <3d375d730905020538h44a190bfj47d7719dcddae653@mail.gmail.com>

On Fri, May 1, 2009 at 23:43, william ratcliff wrote:
> Some time back, there was a python port of mpfit, using Numeric. I talked
> to that author and he is happy with a BSD license. Since then, several
> people have patched it to use numpy (see
> http://code.google.com/p/astrolibpy/), and the author there is happy with
> a BSD license - though there is room for cosmetic improvement. Does anyone
> know the original license for minpack? I plan to talk to the author of
> mpfit next week and see if he is amenable to a BSD license. If so, would
> this fit into numpy? It takes care of the annoyance of making a wrapper to
> leastsq for the simple case of fixed parameters, and has an ansatz for
> dealing with limits. It relies on a QR factorization, but we could either
> switch out the python chunks of code which do that for their minpack
> equivalents, or use the Numerical Recipes suggestion (not the code!!!!) to
> use SVD instead of QR - but as a stop-gap, could one of the developers
> tell me: if we do manage to get BSD licensing agreements, can this go into
> scipy, or do we have to implement from scratch?

It can go into scipy.

> Also, for the BSD agreements, are emails sufficient, or do I need to try to
> get faxes?

Emails are sufficient. Try to get a clear, full statement (e.g.
"I release SuchAndSuch under the BSD license.") rather than something like "Sure." :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Sat May 2 10:45:35 2009 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sat, 2 May 2009 16:45:35 +0200 Subject: [SciPy-user] Constrained least-squares fitting routine? In-Reply-To: References: Message-ID: On Sat, May 2, 2009 at 2:19 AM, Adam Ginsburg wrote: > Hi Scipy group, > ? Is there a constrained least squares fitting routine available, or > can anyone offer me tips on implementing such a beast? ?I have been > using scipy.optimize.leastsq, but I do not know how to constrain > parameters. ?The model I'm looking to emulate is Craig Markwardt's > mpfit.pro (http://www.physics.wisc.edu/~craigm/idl/down/mpfit.pro), in > particular the parinfo section that allows max/min and fixed > parameters. ?I've tried simply constraining parameters in my fitting > function using if statements to set min/max values, but this strategy > fails, I think because the algorithm pushes into space outside of the > limits and can't get back. > ? ? I don't think the constrained fitting tools, e.g. fmin_cobyla, > are what I'm looking for, but I can't be certain I understand them. > Are they likely/likelier to get stuck in local minima than the > Levenberg-Marquardt algorithm used in leastsq? > how about cobyla - that is part of scipy and ready to go ... - Sebastian Haase From adam.ginsburg at colorado.edu Sat May 2 17:39:04 2009 From: adam.ginsburg at colorado.edu (Adam Ginsburg) Date: Sat, 2 May 2009 15:39:04 -0600 Subject: [SciPy-user] Constrained least-squares fitting routine? Message-ID: Thanks for the replies. Re: Rob - You're right that it's not a simple problem to solve, but I'm pretty sure other people have solved it and I was hoping for a solution that was already implemented. However, your response gives me some insight into how to deal with the problem. Thanks. Re: Alan - That looks potentially very useful, but OpenOpt looks to be in pretty early stages of development, especially in terms of documentation. Is there any chance OpenOpt code will be included in scipy in the (near) future? Re: William - Good call on mpfit.py. Somehow my google searches completely missed it, but once you pointed it out I realized I already had the astrolibpy code. mpfit.py is also at http://cars9.uchicago.edu/software/python/mpfit.html... and even though it's not explicitly compatible with numpy, I haven't run into any problems using that version either. Thanks all! I think I have my solution, though I hope mpfit or some variant will be included in scipy in the future. Adam From tpk at kraussfamily.org Sat May 2 17:56:44 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 2 May 2009 14:56:44 -0700 (PDT) Subject: [SciPy-user] firwin upgrades In-Reply-To: <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> Message-ID: <23350618.post@talk.nabble.com> St?fan van der Walt wrote: > > Hi Tom > > 2009/4/26 Tom K. : >> 1) add "btype" kwarg after cutoff that may be any key of the band_dict >> 2) Allow cutoff to be an arbitrary length list. 
>> Only need a boolean to
>> 3) Same as option 2), but instead of a new boolean argument, allow "0" as
>>
>> What are your preferences from the above (or, suggest an alternative)?
>
> My preference is for a notation that allows the reader of code to see
> what is happening without examining the API of firfilter. So,
> suggestion 1, with the prerequisite that the user must specify the
> filter type, appeals to me:
>
> firfilter([0, 0.1], type='pass')    # low-pass
> firfilter([0, 0.1], type='stop')    # high-pass
>
> firfilter([0.1, 0.2], type='pass')  # band-pass filter
> firfilter([0.1, 0.2], type='stop')  # band-stop filter

Thanks for the suggestion Stéfan! I think readability of the client code is a great prerequisite. However, what you suggest is not backwards compatible with the current firwin behavior - and I was talking about upgrading firwin rather than adding a new function. Are you proposing a new function? Or proposing that the upgrades not be backwards compatible?

Stéfan van der Walt wrote:
>
>> NULL AT NYQUIST ISSUE
>>  a) issue a warning, increase length by 1, and return the longer filter
>> [this is the behavior of another popular signal processing package]
>>  b) design the filter anyway, and issue either an error if noScale is
>> False (since the scaling would cause a divide by 0 - see proposal below)
>> or a warning if noScale is True.
>
> Not sure about this. Is there an elegant solution? (a) seems as good as
> any.

Hmm. I think bumping up the filter order seems kind of slimy now that I think about it. If someone asks for a filter of a certain (even) length, but also asks for it to be a highpass filter scaled so that the response at Nyquist is 1, then I think that is a bad choice and an error should result.

Stéfan van der Walt wrote:
>
>> SUPPORT FOR NO SCALING
>> Currently, the filter is scaled so that the DC value of the filter is
>> unity. This filter no longer minimizes the integral of the square of the
>> error because the scaling is not natural. I propose we provide a boolean
>> "noScale" argument that will allow the filter to float according to the
>> actual least-squares filter. How does that sound?
>
> Sure, as long as we avoid the camelCaps :-) In your experience, is
> "no_scale=False" the most common behaviour?

The scaling is the default behavior, which is meant AFAIK to avoid a problem where the bandwidth is very narrow and the filter length is very small - but I think it is misleading, because you really end up scaling up the filter and moving the cutoff frequency in that case. Here's an example - note that the scaling in front of sinc, 0.05, is equal to the Nyquist-normalized cutoff, in order to get a least-squares approximation to a passband of "1":

import numpy
from scipy import special
from pylab import plot

h = .05*special.sinc(.05*(numpy.arange(11)-5))
H = numpy.fft.fft(h, 1000)
h1 = .05*special.sinc(.05*(numpy.arange(101)-50))
H1 = numpy.fft.fft(h1, 1000)
plot(abs(H))
plot(abs(H1))

In the plot, the frequency response of the long, length-101 filter is close to the desired value of 1 across the narrow pass band. But the frequency response H of the short, length-11 filter only comes up to 0.5. The current scaling behavior will bring this value up to exactly 1. I am proposing we add an option to allow users of this function to avoid this scaling, so they can decide whether to tack down the magnitude in this way for themselves.

I now prefer a "scale" flag which defaults to True; somehow noScale with a default of False seems like a double negative. This also avoids the camelCaps ;-)

Cheers,
Tom K.

--
View this message in context: http://www.nabble.com/firwin-upgrades-tp23246480p23350618.html
Sent from the Scipy-User mailing list archive at Nabble.com.
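A rough sketch of a windowed-sinc lowpass with the "scale" flag Tom proposes; this is a hypothetical helper, not the actual firwin code, with the cutoff normalized to Nyquist as in Tom's example:

import numpy as np
from scipy import special

def firwin_sketch(N, cutoff, window=np.hamming, scale=True):
    n = np.arange(N) - (N - 1) / 2.0
    h = cutoff * special.sinc(cutoff * n) * window(N)   # least-squares design
    if scale:
        h = h / h.sum()   # pin the DC response to exactly 1
    return h

h_scaled = firwin_sketch(11, 0.05)              # DC response forced to 1
h_ls = firwin_sketch(11, 0.05, scale=False)     # raw least-squares filter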
-- View this message in context: http://www.nabble.com/firwin-upgrades-tp23246480p23350618.html Sent from the Scipy-User mailing list archive at Nabble.com. From william.ratcliff at gmail.com Sat May 2 18:41:05 2009 From: william.ratcliff at gmail.com (william ratcliff) Date: Sat, 2 May 2009 18:41:05 -0400 Subject: [SciPy-user] Constrained least-squares fitting routine? In-Reply-To: References: Message-ID: <827183970905021541y28c4395fq4c644d97b173a484@mail.gmail.com> Be careful with the version on cars that is using Numeric--at least when I used it with just trying to swap numpy to numeric, I found that there was a glitch in fixed parameters. That is fixed in the version at astrolibpy. William On Sat, May 2, 2009 at 5:39 PM, Adam Ginsburg wrote: > Thanks for the replies. > > Re: Rob - You're right that it's not a simple problem to solve, but > I'm pretty sure other people have solved it and I was hoping for a > solution that was already implemented. However, your response gives > me some insight into how to deal with the problem. Thanks. > > Re: Alan - That looks potentially very useful, but OpenOpt looks to be > in pretty early stages of development, especially in terms of > documentation. Is there any chance OpenOpt code will be included in > scipy in the (near) future? > > Re: William - Good call on mpfit.py. Somehow my google searches > completely missed it, but once you pointed it out I realized I already > had the astrolibpy code. mpfit.py is also at > http://cars9.uchicago.edu/software/python/mpfit.html... and even > though it's not explicitly compatible with numpy, I haven't run into > any problems using that version either. > > Thanks all! I think I have my solution, though I hope mpfit or some > variant will be included in scipy in the future. > > Adam > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sat May 2 20:33:12 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 3 May 2009 02:33:12 +0200 Subject: [SciPy-user] firwin upgrades In-Reply-To: <23350618.post@talk.nabble.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> Message-ID: <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> Hi Tom 2009/5/2 Tom K. : > I think readability of the client code is a great prerequisite. ?However, > what you suggest is not backwards compatible with the current firwin > behavior - and I was talking about upgrading firwin rather than adding a new > function. ?Are you proposing a new function? ?Or proposing that the upgrades > not be backwards compatible? The current signature is s.firwin(N, cutoff, width=None, window='hamming') In the suggested API (which is just that, since I haven't designed filters in ages) the input is always an array/list, so what I would do is to change the signature to s.firwin(N, freqs, type='pass', width=None, window='hamming') Whenever freqs is a scalar and type is 'pass', we have the old behaviour (and API compatibility). For other behaviour, the user has to change type to 'stop', or has to specify an array of values in freqs. If you don't want the 'type' flag, it may be simpler to have two separate functions fir_pass and fir_stop that call firwin with the appropriate parameters. 
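For illustration, here is one way such a pair could be sketched on top of the existing firwin. fir_pass and fir_stop are only names from this discussion, not functions that exist in scipy, and with a scalar cutoff 'stop' amounts to a high-pass, obtained here by spectral inversion of the low-pass:

from scipy import signal

def fir_pass(N, cutoff, width=None, window='hamming'):
    # low-pass: exactly the current firwin behaviour
    return signal.firwin(N, cutoff, width=width, window=window)

def fir_stop(N, cutoff, width=None, window='hamming'):
    # high-pass by spectral inversion of the low-pass; N should be odd
    # so that there is a well-defined centre tap to add the impulse to
    h = -signal.firwin(N, cutoff, width=width, window=window)
    h[N // 2] += 1.0
    return h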
> I now prefer a "scale" flag which defaults to True, somehow noScale with a > default of False seems like a double negative. ?This also avoids the > camelCaps ;-) Good idea. The scaling is necessary by default, so its best the user manually switches it off if not required. Is freqz working properly? I'm trying to get a filter response out of it, but the results look a bit strange. Cheers St?fan From tpk at kraussfamily.org Sat May 2 21:49:28 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 2 May 2009 18:49:28 -0700 (PDT) Subject: [SciPy-user] firwin upgrades In-Reply-To: <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> Message-ID: <23351898.post@talk.nabble.com> St?fan van der Walt wrote: > > The current signature is > > s.firwin(N, cutoff, width=None, window='hamming') > > In the suggested API (which is just that, since I haven't designed > filters in ages) the input is always an array/list, so what I would do > is to change the signature to > > s.firwin(N, freqs, type='pass', width=None, window='hamming') > > Whenever freqs is a scalar and type is 'pass', we have the old > behaviour (and API compatibility). For other behaviour, the user has > to change type to 'stop', or has to specify an array of values in > freqs. > > Is freqz working properly? I'm trying to get a filter response out of > it, but the results look a bit strange. > That sounds good. I see now that it can be made backwards compatible - BUT I think we need to keep the signature static, so the new 'type' kwarg should be at the end, and the 'cutoff' should not change to 'freqs' (in case someone called with kwargs e.g. firwin(N=101, cutoff=.1)). Hence new signature would be: s.firwin(N, cutoff, width=None, window='hamming', type='pass') Does that sound about right? What aspect of freqz is not working for you? This is the first time I've ever run it (in this language :-), seemed to work reasonably well: w,H1=signal.freqz(h1,worN=2000) plot(w/numpy.pi, abs(H1)) -- View this message in context: http://www.nabble.com/firwin-upgrades-tp23246480p23351898.html Sent from the Scipy-User mailing list archive at Nabble.com. From stefan at sun.ac.za Sat May 2 22:14:01 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 3 May 2009 04:14:01 +0200 Subject: [SciPy-user] firwin upgrades In-Reply-To: <23351898.post@talk.nabble.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> <23351898.post@talk.nabble.com> Message-ID: <9457e7c80905021914i5e52630fj22d741183f8b145f@mail.gmail.com> 2009/5/3 Tom K. : > That sounds good. ?I see now that it can be made backwards compatible - BUT > I think we need to keep the signature static, so the new 'type' kwarg should > be at the end, and the 'cutoff' should not change to 'freqs' (in case > someone called with kwargs e.g. firwin(N=101, cutoff=.1)). ?Hence new > signature would be: > ?s.firwin(N, cutoff, width=None, window='hamming', type='pass') > Does that sound about right? Sounds good to me! Cheers St?fan From tpk at kraussfamily.org Sat May 2 22:28:36 2009 From: tpk at kraussfamily.org (Tom K.) 
Date: Sat, 2 May 2009 19:28:36 -0700 (PDT) Subject: [SciPy-user] firwin upgrades In-Reply-To: <9457e7c80905021914i5e52630fj22d741183f8b145f@mail.gmail.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> <23351898.post@talk.nabble.com> <9457e7c80905021914i5e52630fj22d741183f8b145f@mail.gmail.com> Message-ID: <23352086.post@talk.nabble.com> Oops, forgot the scale: s.firwin(N, cutoff, width=None, window='hamming', type='pass', scale=True) -- View this message in context: http://www.nabble.com/firwin-upgrades-tp23246480p23352086.html Sent from the Scipy-User mailing list archive at Nabble.com. From tpk at kraussfamily.org Sat May 2 23:01:24 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 2 May 2009 20:01:24 -0700 (PDT) Subject: [SciPy-user] firwin upgrades In-Reply-To: <23352086.post@talk.nabble.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> <23351898.post@talk.nabble.com> <9457e7c80905021914i5e52630fj22d741183f8b145f@mail.gmail.com> <23352086.post@talk.nabble.com> Message-ID: <23352236.post@talk.nabble.com> Also, 'type' is too generic and almost a key word of the language, so I propose 'btype': s.firwin(N, cutoff, width=None, window='hamming', btype='pass', scale=True) -- View this message in context: http://www.nabble.com/firwin-upgrades-tp23246480p23352236.html Sent from the Scipy-User mailing list archive at Nabble.com. From pav at iki.fi Sun May 3 06:24:36 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 3 May 2009 10:24:36 +0000 (UTC) Subject: [SciPy-user] Constrained least-squares fitting routine? References: Message-ID: Sat, 02 May 2009 15:39:04 -0600, Adam Ginsburg wrote: [clip] > Re: Alan - That looks potentially very useful, but OpenOpt looks to be > in pretty early stages of development, especially in terms of > documentation. Is there any chance OpenOpt code will be included in > scipy in the (near) future? I got the impression that its developers preferred to have a separate project rather than being closely linked with Scipy. (Openopt was a scikit for a while.) -- Pauli Virtanen From wnbell at gmail.com Sun May 3 17:17:27 2009 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 3 May 2009 17:17:27 -0400 Subject: [SciPy-user] Sparse factorisation In-Reply-To: <20090501142127.GG29660@phare.normalesup.org> References: <20090430135619.GB9195@phare.normalesup.org> <20090430155413.GD9195@phare.normalesup.org> <20090501081729.GD30225@phare.normalesup.org> <20090501142127.GG29660@phare.normalesup.org> Message-ID: On Fri, May 1, 2009 at 10:21 AM, Gael Varoquaux wrote: > > I attached a patch implementing the above plan. It is up for review. > > http://projects.scipy.org/scipy/ticket/937 > This looks fine. Going forward it would be nice to expose this functionality through a function lu(A) that returned matrices P,Q,L, and U such that P * A * Q = L * U. However, our SuperLU code is quite old, so we should update it first. 
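For reference, the dense side of scipy already exposes this shape of interface, which may help picture the proposal; a small sketch (the sparse lu(A), with its additional column permutation Q, is only proposed above - the dense routine factors with a row permutation only, A = P*L*U):

import numpy as np
from scipy.linalg import lu

A = np.array([[4., 3.], [6., 3.]])
P, L, U = lu(A)
print np.allclose(A, np.dot(P, np.dot(L, U)))   # True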
--
Nathan Bell
wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From josephsmidt at gmail.com Sun May 3 19:10:47 2009
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Sun, 3 May 2009 16:10:47 -0700
Subject: [SciPy-user] How Do I Interpolate a Grid of Data
Message-ID: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com>

Hello,

I am new to python and scipy. Let's say I have a file called grid.txt that looks like this (x, y, f(x,y)):

1 1 0.6
3 1 0.8
7 1 2.3
1 3 0.3
3 3 1.5
7 3 1.3
1 7 2.6
3 7 2.8
7 7 1.3

How would I, using scipy, interpolate this so I get a value at any point on a new grid [0,10]x[0,10]? I.e., I would like to take the above information and create some NxN array f so that I could say print f[1][2] or print f[9][10], say, and it would give me the interpolated value at that point. Thanks.

Joseph Smidt

--
------------------------------------------------------------------------
Joseph Smidt
Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From djvine at gmail.com Sun May 3 19:10:51 2009
From: djvine at gmail.com (David Vine)
Date: Mon, 04 May 2009 09:10:51 +1000
Subject: [SciPy-user] scipy import problem when running from terminal
Message-ID: <1241392251.12959.9.camel@dvine-laptop>

Hello,

when I execute my python scripts from the terminal (ubuntu 9.04) the scipy module cannot be found when I try to import it, and I get an ImportError exception. However, I am confident that the scipy module is installed correctly, because calling the same script from within IPython using execfile has no problem finding the scipy module. To be specific, the following code:

#test.py
import scipy as sp

def main():
    m = scipy.zeros((100))
    return 0

if __name__ == '__main__':
    main()

will execute to completion under IPython. However, calling it from the command line in a terminal with:

> python test.py

gives the exception

> import scipy as sp
> ImportError: No module named scipy

I cannot figure out why. Any help would be appreciated.

Thanks,
David

From stefan at sun.ac.za Sun May 3 19:46:55 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 4 May 2009 01:46:55 +0200
Subject: [SciPy-user] scipy import problem when running from terminal
In-Reply-To: <1241392251.12959.9.camel@dvine-laptop>
References: <1241392251.12959.9.camel@dvine-laptop>
Message-ID: <9457e7c80905031646y40234cd3ld94fb6546281fe17@mail.gmail.com>

Hi David

2009/5/4 David Vine :
> when I execute my python scripts from the terminal (ubuntu 9.04) the scipy
> module cannot be found when I try to import it, and I get an ImportError
> exception. However, I am confident that the scipy module is installed
> correctly, because calling the same script from within IPython using
> execfile has no problem finding the scipy module.

I noticed that python 2.6 is now the default on Ubuntu, but you may have the 2.5 packages for SciPy and IPython installed. Maybe try "python2.5 myscript.py".

Cheers
Stéfan

From josef.pktd at gmail.com Sun May 3 20:43:23 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 3 May 2009 20:43:23 -0400
Subject: [SciPy-user] How Do I Interpolate a Grid of Data
In-Reply-To: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com>
References: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com>
Message-ID: <1cd32cbb0905031743y27ea8cd7nb87e3b14d137363b@mail.gmail.com>

On Sun, May 3, 2009 at 7:10 PM, Joseph Smidt wrote:
> Hello,
>
> I am new to python and scipy. Let's say I have a file called
> grid.txt that looks like this
> (x, y, f(x,y)):
>
> 1 1 0.6
> 3 1 0.8
> 7 1 2.3
> 1 3 0.3
> 3 3 1.5
> 7 3 1.3
> 1 7 2.6
> 3 7 2.8
> 7 7 1.3
>
> How would I, using scipy, interpolate this so I get a value at any point on
> a new grid [0,10]x[0,10]? I.e., I would like to take the above
> information and create some NxN array f so that I could say print
> f[1][2] or print f[9][10], say, and it would give me the interpolated
> value at that point. Thanks.

scipy.interpolate.interp2d is easy to use for interpolation; I just tried it, and for points outside of the knot points it looks like the values are assumed constant. For extrapolation, I'm not sure whether or how well any of the interpolation methods in scipy.interpolate or scipy.ndimage.interpolation work, but scipy.interpolate.Rbf might be worth a try if the number of knot points is not too large. Otherwise, I would try to first estimate or fix the extreme points of your grid [0,10]x[0,10] before using interp2d.

Josef

>>> import numpy as np
>>> from scipy import interpolate
>>> x = [0,1,2,0,2,0,1,2]; y = [0,0,0,3,3,7,7,7]
>>> z = (1+np.array(x))*(1+np.array(y))
>>> z
array([ 1,  2,  3,  4, 12,  8, 16, 24])
>>> ip = interpolate.interp2d(x,y,z)
>>> ip(3,7)
array([ 24.])
>>> ip(0,10)
array([ 8.])
>>> ip(1,10)
array([ 16.])
>>> xn = np.linspace(0,2,5)
>>> yn = np.linspace(0,7,5)
>>> zn = ip(xn,yn)

From leon_r_adams at hotmail.com Mon May 4 06:43:54 2009
From: leon_r_adams at hotmail.com (Leon Adams)
Date: Mon, 4 May 2009 03:43:54 -0700
Subject: [SciPy-user] Additional constraints on optimizing inputs for optimize.fmin_l_bfgs_b
Message-ID:

Hi,

I was wondering if it is possible to place constraints on the input variables for the optimize.fmin_l_bfgs_b optimization routine. For example, I am optimizing on X1,X2,X3,Y1,Y2,Y3, each with bounds [0,1], and with the additional constraints that X1+X2+X3==1 and Y1+Y2+Y3==1.

Thanks in advance.

From josef.pktd at gmail.com Mon May 4 09:36:25 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 4 May 2009 09:36:25 -0400
Subject: [SciPy-user] Additional constraints on optimizing inputs for optimize.fmin_l_bfgs_b
In-Reply-To:
References:
Message-ID: <1cd32cbb0905040636s6f421ad1pb3f345db9b751551@mail.gmail.com>

On Mon, May 4, 2009 at 6:43 AM, Leon Adams wrote:
> Hi,
>
> I was wondering if it is possible to place constraints on the input variables
> for the optimize.fmin_l_bfgs_b optimization routine. For example, I am
> optimizing on X1,X2,X3,Y1,Y2,Y3, each with bounds [0,1], and with the
> additional constraints that X1+X2+X3==1 and Y1+Y2+Y3==1.

From the description of fmin_l_bfgs_b, it doesn't seem to be possible to use the sum constraints. As an alternative, you could transform the variables or use openopt; see the recent thread with the title "Optimization fmin_tnc with equality constraint".

Josef

From mchandra at iitk.ac.in Mon May 4 10:27:01 2009
From: mchandra at iitk.ac.in (Mani chandra)
Date: Mon, 04 May 2009 07:27:01 -0700
Subject: [SciPy-user] Restricting the values to be plotted while using contourf
Message-ID: <49FEFB35.8020409@iitk.ac.in>

Hi,

How can I restrict the values of 'z' while using contourf.
For example, if my dataset has values of z ranging from say 0 to 10000, I only want the plot of those values of 'z' from 0 to 100 and appropriately set the colormap. Thanks Mani chandra From tim.whitcomb at nrlmry.navy.mil Mon May 4 11:50:34 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim) Date: Mon, 4 May 2009 08:50:34 -0700 Subject: [SciPy-user] Restricting the values to be plotted while usingcontourf In-Reply-To: <49FEFB35.8020409@iitk.ac.in> References: <49FEFB35.8020409@iitk.ac.in> Message-ID: We did this by creating a new instance of matplotlib.colors.Normalize that restricted the range: something like contourf(X, Y, Z, norm=matplotlib.colors(Normalize(vmin=0, vmax=100)). Adding extend='both' in the contourf helped as well, eliminating the out-of-range white spots. I would be very interested if there is a better/more standard way of doing this. Tim > -----Original Message----- > From: scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] On Behalf Of Mani chandra > Sent: Monday, May 04, 2009 7:27 > To: SciPy Users List > Subject: [SciPy-user] Restricting the values to be plotted > while usingcontourf > > Hi, > > How can I restrict the values of 'z' while using > contourf. For example, if my dataset has values of z ranging > from say 0 to 10000, I only want the plot of those values of > 'z' from 0 to 100 and appropriately set the colormap. > > Thanks > Mani chandra > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From adam.reichold at mailbox.tu-dresden.de Mon May 4 09:42:06 2009 From: adam.reichold at mailbox.tu-dresden.de (Adam Reichold) Date: Mon, 4 May 2009 13:42:06 +0000 (UTC) Subject: [SciPy-user] Custom tolerance estimate for odeint Message-ID: I am a physics student and I am currently using scipy for a lecture project. I would like to integrate some ODE using my own "tolerance function," meaning the neccessary stepwidth should not be estimated based on the standard error approximation used in LSODE, but rather using a function that I pass to odeint. Is that possible? I did not find any word on that in the documentation. Maybe someone could give me a hint. Best regards, Adam. P.S.: Maybe this clears it up a bit more: I am integrating an ODE describing a physical problem and I want to use things like conservation of energy or momentum as a tolerance. Meaning I want to use the error of energy conservation to control the stepwidth. From mchandra at iitk.ac.in Mon May 4 13:11:04 2009 From: mchandra at iitk.ac.in (Mani chandra) Date: Mon, 04 May 2009 10:11:04 -0700 Subject: [SciPy-user] [*] Re: Restricting the values to be plotted while usingcontourf In-Reply-To: References: <49FEFB35.8020409@iitk.ac.in> Message-ID: <49FF21A8.1050508@iitk.ac.in> Whitcomb, Mr. Tim wrote: > We did this by creating a new instance of matplotlib.colors.Normalize > that restricted the range: something like > contourf(X, Y, Z, norm=matplotlib.colors(Normalize(vmin=0, vmax=100)). > Adding extend='both' in the contourf helped as well, eliminating the > out-of-range white spots. > > I would be very interested if there is a better/more standard way of > doing this. 
> > Tim > > >> -----Original Message----- >> From: scipy-user-bounces at scipy.org >> [mailto:scipy-user-bounces at scipy.org] On Behalf Of Mani chandra >> Sent: Monday, May 04, 2009 7:27 >> To: SciPy Users List >> Subject: [SciPy-user] Restricting the values to be plotted >> while usingcontourf >> >> Hi, >> >> How can I restrict the values of 'z' while using >> contourf. For example, if my dataset has values of z ranging >> from say 0 to 10000, I only want the plot of those values of >> 'z' from 0 to 100 and appropriately set the colormap. >> >> Thanks >> Mani chandra >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > Hi, Your method does not seem to work. I keep getting the following error: contourf(A, B, z, 100, extend='both', norm=matplotlib.colors(Normalize(vmin=0, vmax=100) ) ) TypeError: 'module' object is not callable From tim.whitcomb at nrlmry.navy.mil Mon May 4 13:18:52 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. Tim) Date: Mon, 4 May 2009 10:18:52 -0700 Subject: [SciPy-user] [*] Re: Restricting the values to be plotted while usingcontourf In-Reply-To: <49FF21A8.1050508@iitk.ac.in> References: <49FEFB35.8020409@iitk.ac.in> <49FF21A8.1050508@iitk.ac.in> Message-ID: > > We did this by creating a new instance of > matplotlib.colors.Normalize > > that restricted the range: something like contourf(X, Y, Z, > > norm=matplotlib.colors(Normalize(vmin=0, vmax=100)). > Your method does not seem to work. I keep getting the > following error: > > contourf(A, B, z, 100, extend='both', > norm=matplotlib.colors(Normalize(vmin=0, vmax=100) ) ) > TypeError: 'module' object is not callable No surprise there - I typed it in wrong: it should be contourf(A, B, z, 100, extend='both', norm=matplotlib.colors.Normalize(vmin=0,vmax=100)) As I understand it, contourf uses a normalizer to map your values to [0,1] then applies the colormap. You're simply replacing the default with a new instance of Normalize that handles a specific range. Tim From wesmckinn at gmail.com Mon May 4 13:41:34 2009 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 4 May 2009 13:41:34 -0400 Subject: [SciPy-user] Trac down? Message-ID: <6c476c8a0905041041q62251f08t1ee65f7aadfbf679@mail.gmail.com> Getting a "database is locked error". Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Mon May 4 13:46:46 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 04 May 2009 10:46:46 -0700 Subject: [SciPy-user] How Do I Interpolate a Grid of Data In-Reply-To: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com> References: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com> Message-ID: <49FF2A06.4060205@noaa.gov> Joseph Smidt wrote: > How would I, using scipy, interpolate this so I get a value at any point on > a new grid [0,10]x[0,10]? I think "griddata" is what you want -- there is one in Matplotlib -- I don't know if there is one in Scipy without MPL.... -Chris -- Christopher Barker, Ph.D. 
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From stefan at sun.ac.za Mon May 4 13:58:02 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 4 May 2009 19:58:02 +0200
Subject: [SciPy-user] Trac down?
In-Reply-To: <6c476c8a0905041041q62251f08t1ee65f7aadfbf679@mail.gmail.com>
References: <6c476c8a0905041041q62251f08t1ee65f7aadfbf679@mail.gmail.com>
Message-ID: <9457e7c80905041058t7e0071a6m82cc3ca503130700@mail.gmail.com>

2009/5/4 Wes McKinney :
> Getting a "database is locked error".

Should be fixed now.

Thanks
Stéfan

From kenneth.arnold at gmail.com Mon May 4 14:16:52 2009
From: kenneth.arnold at gmail.com (Kenneth Arnold)
Date: Mon, 4 May 2009 14:16:52 -0400
Subject: [SciPy-user] How Do I Interpolate a Grid of Data
In-Reply-To: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com>
References: <142682e10905031610y3fab3e27r3eb65c29dc472f1f@mail.gmail.com>
Message-ID:

On Sun, May 3, 2009 at 7:10 PM, Joseph Smidt wrote:
> How would I, using scipy, interpolate this so I get a value at any point on
> a new grid [0,10]x[0,10]?

This is, of course, an underconstrained problem: what prior hypothesis do you have about the form of f(x,y)? E.g., do you think it's linear? Periodic? Are you equally confident about all of the data points you have already? Etc. Methods for accomplishing your goal range from just using the nearest point to making a hierarchical Bayesian model over possible models :)

-Ken

From hbabcock at mac.com Mon May 4 19:06:07 2009
From: hbabcock at mac.com (Hazen Babcock)
Date: Mon, 04 May 2009 19:06:07 -0400
Subject: [SciPy-user] 2D clustering question
Message-ID: <49FF74DF.2070500@mac.com>

Hello,

I've been using scipy.cluster.hierarchy.fclusterdata() to cluster groups of points based on their x and y position. This works well for data sets without too many points, but seems to get pretty slow as the number of points gets into the high thousands (i.e. 6000+). Does anyone know of a more specialized clustering algorithm that might be able to handle even larger numbers of points, i.e. up to 10e6 or so? The points are spread out over 0 - 200 or so in X and Y, and I'm clustering with a 0.5 cutoff. One approach is to break the data set down into smaller sections based on X,Y coordinate, but perhaps something like this already exists?

thanks,
-Hazen

From josef.pktd at gmail.com Tue May 5 11:46:01 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 5 May 2009 11:46:01 -0400
Subject: [SciPy-user] examples in docs
Message-ID: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com>

Rob Falck just pointed out a nice set of examples for fmin_slsqp at http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py.

What's the best way to include them in the docs? the tutorial?
Is there a way to include or link to examples that are too long for a docstring?

Josef

From haase at msg.ucsf.edu Tue May 5 11:58:24 2009
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Tue, 5 May 2009 17:58:24 +0200
Subject: [SciPy-user] examples in docs
In-Reply-To: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com>
References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com>
Message-ID:

On Tue, May 5, 2009 at 5:46 PM, wrote:
> Rob Falck just pointed out a nice set of examples for fmin_slsqp at
> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py.
> > What's the best way to include them in the docs? the tutorial? > Is there a way to include or link to examples that are too long for a docstring? > > Josef very interesting - thanks for the info. Just in case someone else is interested, I pasted the output I got below -- Sebastian Haase Unbounded optimization. Derivatives approximated. NIT FC OBJFUN GNORM 1 4 7.000000E+00 8.485281E+00 2 9 -2.000000E-01 1.697056E+00 3 13 -1.928000E+00 3.394112E-01 4 17 -2.000000E+00 6.664002E-08 Optimization terminated successfully. (Exit mode 0) Current function value: -2.0 Iterations: 4 Function evaluations: 17 Gradient evaluations: 4 Elapsed time: 0.897169113159 ms Results [[1.9999999422580017, 0.99999995250254925], -1.9999999999999969, 4, 0, 'Optimization terminated successfully.'] Unbounded optimization. Derivatives provided. NIT FC OBJFUN GNORM 1 4 7.000000E+00 8.485281E+00 2 9 -2.000000E-01 1.697056E+00 3 13 -1.928000E+00 3.394112E-01 4 17 -2.000000E+00 6.664002E-08 Optimization terminated successfully. (Exit mode 0) Current function value: -2.0 Iterations: 4 Function evaluations: 17 Gradient evaluations: 4 Elapsed time: 0.773191452026 ms Results [[1.9999999422580017, 0.99999995250254925], -1.9999999999999969, 4, 0, 'Optimization terminated successfully.'] Bound optimization. Derivatives approximated. NIT FC OBJFUN GNORM 1 4 7.000000E+00 8.485281E+00 2 8 8.881784E-16 2.000000E+00 3 12 -9.722222E-01 2.603417E+00 4 16 -1.000000E+00 2.828427E+00 Optimization terminated successfully. (Exit mode 0) Current function value: -1.0 Iterations: 4 Function evaluations: 16 Gradient evaluations: 4 Elapsed time: 0.833034515381 ms Results [[1.0000000042219968, 1.0000000042219968], -0.99999999999999956, 4, 0, 'Optimization terminated successfully.'] Bound optimization (equality constraints). Derivatives provided. NIT FC OBJFUN GNORM 1 1 7.000000E+00 8.485281E+00 2 2 8.881784E-16 2.000000E+00 3 3 -9.722222E-01 2.603417E+00 4 4 -1.000000E+00 2.828427E+00 Optimization terminated successfully. (Exit mode 0) Current function value: -1.0 Iterations: 4 Function evaluations: 4 Gradient evaluations: 4 Elapsed time: 0.710964202881 ms Results [[0.99999999999999978, 0.99999999999999978], -1.0, 4, 0, 'Optimization terminated successfully.'] Bound optimization (equality and inequality constraints). Derivatives provided. NIT FC OBJFUN GNORM 1 1 7.000000E+00 8.485281E+00 2 2 -7.500000E-01 2.236068E+00 3 3 -9.932445E-01 2.946957E+00 4 4 -1.000000E+00 2.828427E+00 Optimization terminated successfully. (Exit mode 0) Current function value: -1.0 Iterations: 4 Function evaluations: 4 Gradient evaluations: 4 Elapsed time: 0.903844833374 ms Results [[0.99999999999999978, 0.99999999999999978], -1.0, 4, 0, 'Optimization terminated successfully.'] Bound optimization (equality and inequality constraints). Derivatives provided via functions. NIT FC OBJFUN GNORM 1 1 7.000000E+00 8.485281E+00 2 2 3.444444E+00 6.599663E+00 3 5 1.636490E+00 5.614645E+00 4 8 9.071530E-01 5.091175E+00 5 10 4.618203E-01 4.748310E+00 6 11 -1.269073E+00 2.418143E+00 7 12 -1.034890E+00 2.778647E+00 8 13 -1.000605E+00 2.827571E+00 9 14 -1.000000E+00 2.828427E+00 Optimization terminated successfully. (Exit mode 0) Current function value: -1.00000018313 Iterations: 9 Function evaluations: 14 Gradient evaluations: 9 Elapsed time: 3.03888320923 ms Results [[1.0000000915654403, 1.0], -1.0000001831308722, 9, 0, 'Optimization terminated successfully.'] Bound optimization (equality and inequality constraints). Derivatives provided via functions. 
Constraint jacobians provided via functions NIT FC OBJFUN GNORM 1 1 7.000000E+00 8.485281E+00 2 2 3.444444E+00 6.599663E+00 3 5 1.636489E+00 5.614644E+00 4 8 9.071728E-01 5.091190E+00 5 10 4.618196E-01 4.748311E+00 6 11 -1.269070E+00 2.418148E+00 7 12 -1.034890E+00 2.778648E+00 8 13 -1.000605E+00 2.827571E+00 9 14 -1.000000E+00 2.828427E+00 Optimization terminated successfully. (Exit mode 0) Current function value: -1.00000018311 Iterations: 9 Function evaluations: 14 Gradient evaluations: 9 Elapsed time: 2.02298164368 ms Results [[1.000000091552611, 1.0], -1.0000001831052137, 9, 0, 'Optimization terminated successfully.'] From pav at iki.fi Tue May 5 12:27:14 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 5 May 2009 16:27:14 +0000 (UTC) Subject: [SciPy-user] examples in docs References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> Message-ID: Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: > Rob Falck just pointed out a nice set of examples for fmin_slsqp at > http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. > > What's the best way to include them in the docs? the tutorial? The tutorial, I'd say. > Is there a way to include or link to examples that are too long for a > docstring? Add a label like .. _tutorial-sqlsp: at the relevant point in the tutorial, and refer it to as More examples :ref:`in the tutorial ` or :ref:`tutorial-sqlsp` in the docstring, or something along these lines. -- Pauli Virtanen From josef.pktd at gmail.com Tue May 5 14:17:15 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 5 May 2009 14:17:15 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> Message-ID: <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: > >> Rob Falck just pointed out a nice set of examples for fmin_slsqp at >> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. >> >> What's the best way to include them in the docs? the tutorial? > > The tutorial, I'd say. > >> Is there a way to include or link to examples that are too long for a >> docstring? > > Add a label like > > .. _tutorial-sqlsp: > > at the relevant point in the tutorial, and refer it to as > > ? ? ? ?More examples :ref:`in the tutorial ` > > or :ref:`tutorial-sqlsp` in the docstring, or something along these lines. > I did this, but mostly cut and paste: http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp Maybe someone who knows more about fmin_slsqp can edit it. One more question: http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst does a literal include of some files in the examples subdirectory. I didn't find a way to display and edit them in the docs editor. If we include more sample scripts, it might be good, to find a way to add and edit them through the doc editor. Right now the relatively long example script that I pasted looks a bit ugly in the tutorial text. 
Josef From robfalck at gmail.com Tue May 5 14:43:20 2009 From: robfalck at gmail.com (Rob Falck) Date: Tue, 5 May 2009 14:43:20 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> Message-ID: That looks good. I apologize for not adding the documentation myself. On Tue, May 5, 2009 at 2:17 PM, wrote: > On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: > > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: > > > >> Rob Falck just pointed out a nice set of examples for fmin_slsqp at > >> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. > >> > >> What's the best way to include them in the docs? the tutorial? > > > > The tutorial, I'd say. > > > >> Is there a way to include or link to examples that are too long for a > >> docstring? > > > > Add a label like > > > > .. _tutorial-sqlsp: > > > > at the relevant point in the tutorial, and refer it to as > > > > More examples :ref:`in the tutorial ` > > > > or :ref:`tutorial-sqlsp` in the docstring, or something along these > lines. > > > > I did this, but mostly cut and paste: > > > http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp > > http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp > > Maybe someone who knows more about fmin_slsqp can edit it. > > One more question: > http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst > does a literal include of some files in the examples subdirectory. I > didn't find a way to display and edit them in the docs editor. > If we include more sample scripts, it might be good, to find a way to > add and edit them through the doc editor. Right now the relatively > long example script that I pasted looks a bit ugly in the tutorial > text. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From SGONG at mdacorporation.com Tue May 5 14:57:49 2009 From: SGONG at mdacorporation.com (Shawn GONG) Date: Tue, 5 May 2009 11:57:49 -0700 Subject: [SciPy-user] when will scipy win32 for Python2.6 be ready Message-ID: <5F76F80A436C1146A7C04C6F844026FBB726F6@VMXYVR2.ds.mda.ca> hi, I am running Python2.6 and MS VS 2005 on Windows XP. I'd like to use scipy. When will scipy binary install for Python2.6 be ready? I tried to run "python setup.py install", but got error message saying: numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. Thanks, Shawn -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue May 5 15:06:38 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 5 May 2009 15:06:38 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> Message-ID: <1cd32cbb0905051206l61b20428t371a0aa467ebf9c7@mail.gmail.com> On Tue, May 5, 2009 at 2:43 PM, Rob Falck wrote: > That looks good.? 
I apologize for not adding the documentation myself. > > On Tue, May 5, 2009 at 2:17 PM, wrote: >> >> On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: >> > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: >> > >> >> Rob Falck just pointed out a nice set of examples for fmin_slsqp at >> >> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. >> >> >> >> What's the best way to include them in the docs? the tutorial? >> > >> > The tutorial, I'd say. >> > >> >> Is there a way to include or link to examples that are too long for a >> >> docstring? >> > >> > Add a label like >> > >> > .. _tutorial-sqlsp: >> > >> > at the relevant point in the tutorial, and refer it to as >> > >> > ? ? ? ?More examples :ref:`in the tutorial ` >> > >> > or :ref:`tutorial-sqlsp` in the docstring, or something along these >> > lines. >> > >> >> I did this, but mostly cut and paste: >> >> >> http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp >> >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp >> >> Maybe someone who knows more about fmin_slsqp can edit it. >> >> One more question: >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst >> does a literal include of some files in the examples subdirectory. I >> didn't find a way to display and edit them in the docs editor. >> If we include more sample scripts, it might be good, to find a way to >> add and edit them through the doc editor. Right now the relatively >> long example script that I pasted looks a bit ugly in the tutorial >> text. Thanks for checking. Do you have a brief description available? Without looking at it more carefully I got a bit confused about the relationship to least squares. From the description in the fortran file it looks like a minimizer of an objective function and not a least squares minimizer. Is iterative least squares just the algorithm or is there a closer relationship to LS? Josef PS we prefer bottom posting. From robfalck at gmail.com Tue May 5 15:25:40 2009 From: robfalck at gmail.com (Rob Falck) Date: Tue, 5 May 2009 15:25:40 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: <1cd32cbb0905051206l61b20428t371a0aa467ebf9c7@mail.gmail.com> References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> <1cd32cbb0905051206l61b20428t371a0aa467ebf9c7@mail.gmail.com> Message-ID: On Tue, May 5, 2009 at 3:06 PM, wrote: > On Tue, May 5, 2009 at 2:43 PM, Rob Falck wrote: > > That looks good. I apologize for not adding the documentation myself. > > > > On Tue, May 5, 2009 at 2:17 PM, wrote: > >> > >> On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: > >> > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: > >> > > >> >> Rob Falck just pointed out a nice set of examples for fmin_slsqp at > >> >> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. > >> >> > >> >> What's the best way to include them in the docs? the tutorial? > >> > > >> > The tutorial, I'd say. > >> > > >> >> Is there a way to include or link to examples that are too long for a > >> >> docstring? > >> > > >> > Add a label like > >> > > >> > .. _tutorial-sqlsp: > >> > > >> > at the relevant point in the tutorial, and refer it to as > >> > > >> > More examples :ref:`in the tutorial ` > >> > > >> > or :ref:`tutorial-sqlsp` in the docstring, or something along these > >> > lines. 
> >> > > >> > >> I did this, but mostly cut and paste: > >> > >> > >> > http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp > >> > >> > http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp > >> > >> Maybe someone who knows more about fmin_slsqp can edit it. > >> > >> One more question: > >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst > >> does a literal include of some files in the examples subdirectory. I > >> didn't find a way to display and edit them in the docs editor. > >> If we include more sample scripts, it might be good, to find a way to > >> add and edit them through the doc editor. Right now the relatively > >> long example script that I pasted looks a bit ugly in the tutorial > >> text. > > Thanks for checking. > > Do you have a brief description available? > Without looking at it more carefully I got a bit confused about the > relationship to least squares. From the description in the fortran > file it looks like a minimizer of an objective function and not a > least squares minimizer. Is iterative least squares just the algorithm > or is there a closer relationship to LS? > > Josef > > PS we prefer bottom posting. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Sequential Least Squares is just the name of the algorithm. Honestly I use it as a black box and am not extremely familiar with the algorithm implemented in Fortran. -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue May 5 15:54:09 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 5 May 2009 15:54:09 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> <1cd32cbb0905051206l61b20428t371a0aa467ebf9c7@mail.gmail.com> Message-ID: <1cd32cbb0905051254y63093189ya572b4c5b75d12c8@mail.gmail.com> On Tue, May 5, 2009 at 3:25 PM, Rob Falck wrote: > On Tue, May 5, 2009 at 3:06 PM, wrote: >> >> On Tue, May 5, 2009 at 2:43 PM, Rob Falck wrote: >> > That looks good.? I apologize for not adding the documentation myself. >> > >> > On Tue, May 5, 2009 at 2:17 PM, wrote: >> >> >> >> On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: >> >> > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: >> >> > >> >> >> Rob Falck just pointed out a nice set of examples for fmin_slsqp at >> >> >> http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. >> >> >> >> >> >> What's the best way to include them in the docs? the tutorial? >> >> > >> >> > The tutorial, I'd say. >> >> > >> >> >> Is there a way to include or link to examples that are too long for >> >> >> a >> >> >> docstring? >> >> > >> >> > Add a label like >> >> > >> >> > .. _tutorial-sqlsp: >> >> > >> >> > at the relevant point in the tutorial, and refer it to as >> >> > >> >> > ? ? ? ?More examples :ref:`in the tutorial ` >> >> > >> >> > or :ref:`tutorial-sqlsp` in the docstring, or something along these >> >> > lines. 
>> >> > >> >> >> >> I did this, but mostly cut and paste: >> >> >> >> >> >> >> >> http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp >> >> >> >> >> >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp >> >> >> >> Maybe someone who knows more about fmin_slsqp can edit it. >> >> >> >> One more question: >> >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst >> >> does a literal include of some files in the examples subdirectory. I >> >> didn't find a way to display and edit them in the docs editor. >> >> If we include more sample scripts, it might be good, to find a way to >> >> add and edit them through the doc editor. Right now the relatively >> >> long example script that I pasted looks a bit ugly in the tutorial >> >> text. >> >> Thanks for checking. >> >> Do you have a brief description available? >> Without looking at it more carefully I got a bit confused about the >> relationship to least squares. From the description in the fortran >> file it looks like a minimizer of an objective function and not a >> least squares minimizer. Is iterative least squares just the algorithm >> or is there a closer relationship to LS? >> >> Josef >> >> PS we prefer bottom posting. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > Sequential Least Squares is just the name of the algorithm.? Honestly I use > it as a black box and am not extremely familiar with the algorithm > implemented in Fortran. > I found the name a bit misleading. I think your examples will help to advertise this minimizer, since it wasn't clear that it can handle constraints in a flexible way, including nonlinear constraints. Josef From robfalck at gmail.com Tue May 5 16:19:24 2009 From: robfalck at gmail.com (Rob Falck) Date: Tue, 5 May 2009 16:19:24 -0400 Subject: [SciPy-user] examples in docs In-Reply-To: <1cd32cbb0905051254y63093189ya572b4c5b75d12c8@mail.gmail.com> References: <1cd32cbb0905050846v6b6b18cbmafe63e6c60d0c664@mail.gmail.com> <1cd32cbb0905051117i459dc715tc0127ace002e9c3c@mail.gmail.com> <1cd32cbb0905051206l61b20428t371a0aa467ebf9c7@mail.gmail.com> <1cd32cbb0905051254y63093189ya572b4c5b75d12c8@mail.gmail.com> Message-ID: On Tue, May 5, 2009 at 3:54 PM, wrote: > On Tue, May 5, 2009 at 3:25 PM, Rob Falck wrote: > > On Tue, May 5, 2009 at 3:06 PM, wrote: > >> > >> On Tue, May 5, 2009 at 2:43 PM, Rob Falck wrote: > >> > That looks good. I apologize for not adding the documentation myself. > >> > > >> > On Tue, May 5, 2009 at 2:17 PM, wrote: > >> >> > >> >> On Tue, May 5, 2009 at 12:27 PM, Pauli Virtanen wrote: > >> >> > Tue, 05 May 2009 11:46:01 -0400, josef.pktd wrote: > >> >> > > >> >> >> Rob Falck just pointed out a nice set of examples for fmin_slsqp > at > >> >> >> > http://projects.scipy.org/scipy/attachment/ticket/570/slsqp_test.py. > >> >> >> > >> >> >> What's the best way to include them in the docs? the tutorial? > >> >> > > >> >> > The tutorial, I'd say. > >> >> > > >> >> >> Is there a way to include or link to examples that are too long > for > >> >> >> a > >> >> >> docstring? > >> >> > > >> >> > Add a label like > >> >> > > >> >> > .. _tutorial-sqlsp: > >> >> > > >> >> > at the relevant point in the tutorial, and refer it to as > >> >> > > >> >> > More examples :ref:`in the tutorial ` > >> >> > > >> >> > or :ref:`tutorial-sqlsp` in the docstring, or something along these > >> >> > lines. 
> >> >> > > >> >> > >> >> I did this, but mostly cut and paste: > >> >> > >> >> > >> >> > >> >> > http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp > >> >> > >> >> > >> >> > http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst/#tutorial-sqlsp > >> >> > >> >> Maybe someone who knows more about fmin_slsqp can edit it. > >> >> > >> >> One more question: > >> >> http://docs.scipy.org/scipy/docs/scipy-docs/tutorial/optimize.rst > >> >> does a literal include of some files in the examples subdirectory. I > >> >> didn't find a way to display and edit them in the docs editor. > >> >> If we include more sample scripts, it might be good, to find a way to > >> >> add and edit them through the doc editor. Right now the relatively > >> >> long example script that I pasted looks a bit ugly in the tutorial > >> >> text. > >> > >> Thanks for checking. > >> > >> Do you have a brief description available? > >> Without looking at it more carefully I got a bit confused about the > >> relationship to least squares. From the description in the fortran > >> file it looks like a minimizer of an objective function and not a > >> least squares minimizer. Is iterative least squares just the algorithm > >> or is there a closer relationship to LS? > >> > >> Josef > >> > >> PS we prefer bottom posting. > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > Sequential Least Squares is just the name of the algorithm. Honestly I > use > > it as a black box and am not extremely familiar with the algorithm > > implemented in Fortran. > > > > I found the name a bit misleading. > I think your examples will help to advertise this minimizer, since it > wasn't clear that it can handle constraints in a flexible way, > including nonlinear constraints. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I hope users put it to good use. I added it because it filled a gap in Scipy by allowing the use of both equality and inequality constraints. We just it in trajectory optimization codes where I work, and while it is not as fast as sparse solvers like SNOPT, it gets the job done. -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed May 6 01:08:32 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 May 2009 14:08:32 +0900 Subject: [SciPy-user] when will scipy win32 for Python2.6 be ready In-Reply-To: <5F76F80A436C1146A7C04C6F844026FBB726F6@VMXYVR2.ds.mda.ca> References: <5F76F80A436C1146A7C04C6F844026FBB726F6@VMXYVR2.ds.mda.ca> Message-ID: <4A011B50.4050003@ar.media.kyoto-u.ac.jp> Shawn GONG wrote: > > hi, > > I am running Python2.6 and MS VS 2005 on Windows XP. > I'd like to use scipy. When will scipy binary install for Python2.6 be > ready? > Hopefully, scipy 0.7.1 will have a python 2.6 binary. > I tried to run "python setup.py install", but got error message saying: > numpy.distutils.system_info.BlasNotFoundError: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. > Yes, you need both BLAS and LAPACK to build scipy by yourself. 
Building them is a bit difficult on windows, cheers, David From tanja.gurzhiy at gmail.com Wed May 6 11:05:12 2009 From: tanja.gurzhiy at gmail.com (Tanja Gurzhiy) Date: Wed, 6 May 2009 17:05:12 +0200 Subject: [SciPy-user] SciPy autotester fails on pilutil Message-ID: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> Hi all, I am currently busy with SciPy 0.7.0 installation (I have NumPy v1.3.0 installed and PIL v). If I run the scipy autotester (scipy.test()), I get the failure in testcase from scipy.misc ERROR: Failure: ImportError (No module named Image) ---------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/home/tgurzhiy/cadlib/scipy/misc/tests/test_pilutil.py", line 12, in import scipy.misc.pilutil as pilutil File "/cadappl/python_scipy/0.7.0/python_scipy/lib/python2.5/site-packages/scipy/misc/pilutil.py", line 10, in import Image ImportError: No module named Image In pilutil.py line 10 use import Image, the module Image is located in PIL. In my case PYTHONPATH includes only directory (~/cadlib) that includes PIL, NumPy, matloptlib and SciPy therefore module Image cannot be imported directly if scipy.test() is ran. As consequences of this error in pilutil, the functions from this file cannot be used (e.g., imread, imshow, etc.) >>> from scipy import misc >>> im = misc.lena() >>> im.shape (512, 512) >>> im1= misc.imresize(im,1.1) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'imresize' Why scipy/misc/pilutil.py uses direct import Image and not import PIL.Image? The work-around could be to update my PYTHONPATH with ~/cadlib/PIL , but I think it is not what should be done. Correct me if I am wrong. Best regards, Tanja Gurzhiy Please find below the information about my system, platform, etc. 'from numpy.f2py.diagnose import run; run() os.name='posix' ------ sys.platform='sunos5' ------ sys.version: 2.5.1 (r251:54863, Aug 6 2008, 14:27:55) [GCC 3.4.3] ------ sys.prefix: /home/tgurzhiy/.caddata/python/python ------ sys.path='/usr/local/asm::/home/tgurzhiy/cadlib:/usr/local/asm/lib/python:/usr/local/asm/bin/python:/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages:/home/tgurzhiy/.caddata/python/python/lib/python25.zip:/home/tgurzhiy/.caddata/python/python/lib/python2.5:/home/tgurzhiy/.caddata/python/python/lib/python2.5/plat-sunos5:/home/tgurzhiy/.caddata/python/python/lib/python2.5/lib-tk:/home/tgurzhiy/.caddata/python/python/lib/python2.5/lib-dynload:/home/tgurzhiy/.caddata/python/python/lib/python2.5/site-packages' ------ Failed to import Numeric: No module named Numeric Failed to import numarray: No module named numarray Found new numpy version '1.3.0' in /home/tgurzhiy/cadlib/numpy/__init__.pyc Found f2py2e version '2' in /home/tgurzhiy/cadlib/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/home/tgurzhiy/cadlib/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... 
ok ------ Checking availability of supported Fortran compilers: GnuFCompiler instance properties: archiver = ['/home/tgurzhiy/cadbin/g77', '-cr'] compile_switch = '-c' compiler_f77 = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-fno- second-underscore', '-fPIC', '-O3', '-funroll-loops'] compiler_f90 = None compiler_fix = None libraries = ['g2c'] library_dirs = ['/home/cadappl/gcc/3.4.3/gcc/bin/../lib/gcc/sparc-sun- solaris2.8/3.4.3'] linker_exe = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-g', '- Wall'] linker_so = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-g', '- Wall', '-shared', '-mimpure-text'] object_switch = '-o ' ranlib = ['/home/tgurzhiy/cadbin/g77'] version = LooseVersion ('3.4.3') version_cmd = ['/home/tgurzhiy/cadbin/g77', '--version'] Fortran compilers found: --fcompiler=gnu GNU Fortran 77 compiler (3.4.3) Compilers available for this platform, but not found: --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu95 GNU Fortran 95 compiler --fcompiler=sun Sun or Forte Fortran 95 Compiler Compilers not available on this platform: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for EM64T-based apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler --fcompiler=mips MIPSpro Fortran Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=none Fake Fortran compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: CPUInfoBase__get_nbits getNCPUs is_32bit is_cpusparcv9 is_sparcv9 is_sun4 ------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed May 6 15:52:54 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 May 2009 15:52:54 -0400 Subject: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> Message-ID: <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> On Wed, May 6, 2009 at 11:05 AM, Tanja Gurzhiy wrote: > Hi all, > > > > I am currently busy with SciPy 0.7.0 installation (I have NumPy v1.3.0 > installed and PIL v). > > If I run the scipy autotester (scipy.test()), I get the failure in testcase > from scipy.misc > > ERROR: Failure: ImportError (No module named Image) > When I installed PIL, it added the file PIL.pth to the sitepackages directory, which adds the pil directory to the python path, and so ``import Image`` works in the standard install (I'm on Windows). Can you check if you have a PIL.pth? the content is just the word PIL unless it works differently with your operating system. 
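To make the mechanism concrete: a .pth file is plain text with one directory name per line, and site.py appends every entry that exists to sys.path at interpreter startup. A sketch - the path shown here is hypothetical, adjust it for your install:

>>> open('/usr/lib/python2.5/site-packages/PIL.pth').read()
'PIL\n'
>>> import sys
>>> [p for p in sys.path if p.endswith('PIL')]
['/usr/lib/python2.5/site-packages/PIL']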
Josef

From Chris.Barker at noaa.gov Wed May 6 17:07:24 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 06 May 2009 14:07:24 -0700 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> Message-ID: <4A01FC0C.7010709@noaa.gov>

josef.pktd at gmail.com wrote: > When I installed PIL, it added the file PIL.pth to the site-packages > directory, which adds the PIL directory to the Python path, and so > ``import Image`` works in the standard install (I'm on Windows). > > Can you check if you have a PIL.pth? The content is just the word PIL, > unless it works differently with your operating system.

FWIW, I think "import Image" is deprecated, in favor of "from PIL import Image", which removes the need for the pth file. The PIL package has a LOT of stuff in it, you really don't want all of it on your sys.path.

"Namespaces are one honking great idea -- let's do more of those!"

(which isn't an exact fit to the topic at hand, but it's close)

-CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From josef.pktd at gmail.com Wed May 6 17:51:25 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 May 2009 17:51:25 -0400 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <4A01FC0C.7010709@noaa.gov> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> <4A01FC0C.7010709@noaa.gov> Message-ID: <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com>

On Wed, May 6, 2009 at 5:07 PM, Christopher Barker wrote: > josef.pktd at gmail.com wrote: >> When I installed PIL, it added the file PIL.pth to the site-packages >> directory, which adds the PIL directory to the Python path, and so >> ``import Image`` works in the standard install (I'm on Windows). >> >> Can you check if you have a PIL.pth? The content is just the word PIL, >> unless it works differently with your operating system. > > FWIW, I think "import Image" is deprecated, in favor of "from PIL import > Image", which removes the need for the pth file. The PIL package has a > LOT of stuff in it, you really don't want all of it on your sys.path. > > "Namespaces are one honking great idea -- let's do more of those!" > > (which isn't an exact fit to the topic at hand, but it's close)

+1 When I looked this up, I also found a numpy.pth in site-packages, but I don't know which installer put it there, or if I put it there myself. Is numpy.pth necessary for anything?
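(Re the deprecated ``import Image``: if scipy wants to keep working with both PIL layouts, the usual spelling is a guarded import -- a sketch, not the actual pilutil code:

try:
    from PIL import Image    # package-style install, no .pth file needed
except ImportError:
    import Image             # old flat layout that relies on PIL.pth

which succeeds whether or not a PIL.pth is present.)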
Josef

From robert.kern at gmail.com Wed May 6 18:01:55 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 6 May 2009 18:01:55 -0400 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> <4A01FC0C.7010709@noaa.gov> <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com> Message-ID: <3d375d730905061501v3772cd48mda0d2ed7c37b0b39@mail.gmail.com>

On Wed, May 6, 2009 at 17:51, wrote: > On Wed, May 6, 2009 at 5:07 PM, Christopher Barker > wrote: >> josef.pktd at gmail.com wrote: >>> When I installed PIL, it added the file PIL.pth to the site-packages >>> directory, which adds the PIL directory to the Python path, and so >>> ``import Image`` works in the standard install (I'm on Windows). >>> >>> Can you check if you have a PIL.pth? The content is just the word PIL, >>> unless it works differently with your operating system. >> >> FWIW, I think "import Image" is deprecated, in favor of "from PIL import >> Image", which removes the need for the pth file. The PIL package has a >> LOT of stuff in it, you really don't want all of it on your sys.path. >> >> "Namespaces are one honking great idea -- let's do more of those!" >> >> (which isn't an exact fit to the topic at hand, but it's close) >> > > +1 > When I looked this up, I also found a numpy.pth in site-packages, but I > don't know which installer put it there, or if I put it there myself. > Is numpy.pth necessary for anything?

What's inside it?

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From Chris.Barker at noaa.gov Wed May 6 18:06:38 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 06 May 2009 15:06:38 -0700 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> <4A01FC0C.7010709@noaa.gov> <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com> Message-ID: <4A0209EE.6010303@noaa.gov>

josef.pktd at gmail.com wrote: > +1 > When I looked this up, I also found a numpy.pth in site-packages, but I > don't know which installer put it there, or if I put it there myself. > Is numpy.pth necessary for anything?

It shouldn't be, no. Way back when, Numeric used a pth file when it was installed, but it's been a long time. Is there a "numpy" package in site-packages? There should be. If you installed it with easy_install then it might be buried in an egg, but that should be put in the path by easy_install.pth

-Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From josef.pktd at gmail.com Wed May 6 18:30:56 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 May 2009 18:30:56 -0400 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <3d375d730905061501v3772cd48mda0d2ed7c37b0b39@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> <1cd32cbb0905061252k6d7b8359rcf6d4f1e05531d03@mail.gmail.com> <4A01FC0C.7010709@noaa.gov> <1cd32cbb0905061451l3c6d16a0ldc807b85fae15c91@mail.gmail.com> <3d375d730905061501v3772cd48mda0d2ed7c37b0b39@mail.gmail.com> Message-ID: <1cd32cbb0905061530s5b8b2b8dp408c44414a69c2cf@mail.gmail.com>

On Wed, May 6, 2009 at 6:01 PM, Robert Kern wrote: > On Wed, May 6, 2009 at 17:51, wrote: >> On Wed, May 6, 2009 at 5:07 PM, Christopher Barker >> wrote: >>> josef.pktd at gmail.com wrote: >>>> When I installed PIL, it added the file PIL.pth to the site-packages >>>> directory, which adds the PIL directory to the Python path, and so >>>> ``import Image`` works in the standard install (I'm on Windows). >>>> >>>> Can you check if you have a PIL.pth? The content is just the word PIL, >>>> unless it works differently with your operating system. >>> >>> FWIW, I think "import Image" is deprecated, in favor of "from PIL import >>> Image", which removes the need for the pth file. The PIL package has a >>> LOT of stuff in it, you really don't want all of it on your sys.path. >>> >>> "Namespaces are one honking great idea -- let's do more of those!" >>> >>> (which isn't an exact fit to the topic at hand, but it's close) >>> >> >> +1 >> When I looked this up, I also found a numpy.pth in site-packages, but I >> don't know which installer put it there, or if I put it there myself. >> Is numpy.pth necessary for anything? > > What's inside it? > just
``numpy`` > > so I can do > >>>> import add_newdocs >>>> add_newdocs.__file__ > 'C:\\Programs\\Python25\\lib\\site-packages\\numpy\\add_newdocs.pyc' > > Just to verify, I reinstalled > numpy-1.3.0-win32-superpack-python2.5.exe and it didn't add a > numpy.pth file. > Either it was from an earlier installer or when I installed my own build. > > So, I deleted numpy.pth. Yup, it was definitely incorrect. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob.patro at gmail.com Wed May 6 19:00:56 2009 From: rob.patro at gmail.com (Rob Patro) Date: Wed, 06 May 2009 19:00:56 -0400 Subject: [SciPy-user] sparse SVD In-Reply-To: References: Message-ID: <4A0216A8.8080903@gmail.com> Kenneth Arnold wrote: > 2009/4/9 Rob Patro : > >> Is there any implementation of sparse SVD available in scipy? If not, >> does anyone know of an implementation available in python at all? I'd >> like to port a project on which I'm working from Matlab to Python, but >> it is crucial that I am able to perform the SVD of large and *very* >> sparse matrices. >> > > > The Commonsense Computing Initiative at the MIT Media Lab > (http://csc.media.mit.edu but probably best known for > http://openmind.media.mit.edu) had a similar problem two years ago: we > wanted to run an SVD on a large, sparse semantic network. So we build > Divisi (http://divisi.media.mit.edu), which is based on numpy, but > also: > > * wraps SVDLIBC (first with SWIG, now with Cython) > (the SVD functionality is abstracted, so we could easily switch to > something like cvxopt or ARPACK which I hadn't heard of) > * has a data structure for sparse tensors (i.e., matrices with dim > 2) > * has a layered model of views enabling: > - labeling rows and columns with arbitrary Python objects > - various forms of normalization > - unfolding tensors into 2D for the higher-order SVD (HO-SVD) operation > * supports various math with the SVD results > * supports "blending" data from different sources > * (in progress) can reason by association as well as similarity > > The result, refined over almost 2 years of work (by grad students) has > powered nearly all of our group's research during this time. It's > released under GPL, but other licensing is possible especially if your > company sponsors the Media Lab. > > If you have the numpy headers, you should be able to just > `easy_install divisi`. We've recently been working on distribution, so > let us know if anything about that is broken. > > We think that significant chunks of this code would make a great > addition to numpy/scipy. We don't have the resources to push > integration ourselves, though, but we could certainly help anyone who > is interested in assimilating our code. And in the mean time it should > be useful to anyone wanting to run sparse SVDs. > > -Ken > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Ken, This looks very interesting indeed. In particular, the matlab implementation of sparse SVD has one feature of which I make extensive use; this is the ability to request singular values around a spectral shift, sigma, of my choosing. Does SVDLIBC offer such an option? How difficult for me would it be to expose such an option in divisi? 
Thanks, Rob

From kenneth.arnold at gmail.com Wed May 6 19:52:32 2009 From: kenneth.arnold at gmail.com (Kenneth Arnold) Date: Wed, 6 May 2009 19:52:32 -0400 Subject: Re: [SciPy-user] sparse SVD In-Reply-To: <9457e7c80905011058y91bd794s54823406399f07ba@mail.gmail.com> References: <9457e7c80905011058y91bd794s54823406399f07ba@mail.gmail.com> Message-ID:

Stéfan van der Walt: > Can you tell us a bit more about the sparse tensor representation you use?

We've rolled our own. We first used a dictionary mapping tuples to keys, but now switched to a nested dictionary format. Those are of course internal representations; to the user it just looks like `tensor['dog','IsA','pet']` (which translates into numerical indices and then into nested dict lookups).

Sidebar: Currently this wastes some RAM because the dicts aren't specialized to integer indices (they're just Python objects, which means an extra pointer at best). This would be a really helpful place to jump in and contribute, because we're hitting memory limits in our latest experiments! :) On the other hand, most of our experiments in practice only use two dimensions, for which we could specialize to use numpy or cvxopt's sparse matrix implementations.

SVDLIBC uses a compressed-sparse-column representation. We have Cython code to convert our matrices into that, after unfolding tensors to matrices if necessary.

And by the way: we use numpy's ndarray for dense tensor storage, but we need iteration to be dict-like (i.e., over tuples of keys), so we have to wrap ndarray. This works, but is tedious and slows things down. Any better ideas?
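To picture the nested-dictionary format described above, here is a toy labelled sparse tensor -- purely illustrative, with made-up names; Divisi's real classes are more elaborate:

class ToyLabeledTensor(object):
    # one dict level per mode; labels are mapped to integer ids on demand
    def __init__(self):
        self._data = {}
        self._ids = {}

    def _id(self, label):
        return self._ids.setdefault(label, len(self._ids))

    def __setitem__(self, labels, value):
        d = self._data
        keys = [self._id(lab) for lab in labels]
        for k in keys[:-1]:
            d = d.setdefault(k, {})
        d[keys[-1]] = value

    def __getitem__(self, labels):
        d = self._data
        for k in [self._id(lab) for lab in labels]:
            d = d[k]    # KeyError for entries that were never set
        return d

t = ToyLabeledTensor()
t['dog', 'IsA', 'pet'] = 1.0
print t['dog', 'IsA', 'pet']    # 1.0

About licensing: after some internal discussion, we concluded that staying GPL overall is necessary. We may be able to relicense some of the lower-level stuff (like the sparse tensor code), depending on interest.

-Ken

2009/5/1 Stéfan van der Walt : > Hi Ken > > 2009/5/1 Kenneth Arnold : >> We think that significant chunks of this code would make a great >> addition to numpy/scipy. We don't have the resources to push >> integration ourselves, though, but we could certainly help anyone who >> is interested in assimilating our code. And in the mean time it should >> be useful to anyone wanting to run sparse SVDs. > > We are always glad for new code contributions! SciPy and NumPy are > BSD licensed, so would your lab be able to relicense the code? I > think we could benefit from having both SVDLIBC and ARPACK sparse SVD > wrappers in SciPy. > > Can you tell us a bit more about the sparse tensor representation you use? > > Regards > Stéfan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

From tpk at kraussfamily.org Wed May 6 21:33:55 2009 From: tpk at kraussfamily.org (Tom K.) Date: Wed, 6 May 2009 18:33:55 -0700 (PDT) Subject: [SciPy-user] firwin upgrades In-Reply-To: <23352236.post@talk.nabble.com> References: <23246480.post@talk.nabble.com> <9457e7c80905011046s674105ffj99099676b08be0db@mail.gmail.com> <23350618.post@talk.nabble.com> <9457e7c80905021733v3bf24777y9e66eedfa72b08d2@mail.gmail.com> <23351898.post@talk.nabble.com> <9457e7c80905021914i5e52630fj22d741183f8b145f@mail.gmail.com> <23352086.post@talk.nabble.com> <23352236.post@talk.nabble.com> Message-ID: <23418739.post@talk.nabble.com>

OK, here's a stab at the new functionality. First the new routine (excerpt of filter_design.py), then the new test file (entire test_filter_design.py).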
What do y'all think about adding this functionality into the scipy repository? Can I check it in somewhere, or should I submit a patch (to where)? The only thing I can think of in addition is that the multiband API we came up with is a little bit non-intuitive (since it creates a passband starting from Nyquist at the end of the cutoff list, then alternates between stopbands and passbands iterating backwards through the cutoff list). I would propose accepting a list of (left, right) tuples as a clearer, easier-to-use alternative. Any thoughts?

class PassBand(object):
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def __repr__(self):
        return "<PassBand left=%(left)s, right=%(right)s>" % self.__dict__

def firwin(N, cutoff, width=None, window='hamming', btype='pass', scale=True):
    """FIR Filter Design using windowed ideal filter method.

    Examples (preferred syntax):
      firwin(N, [0, f], btype='pass')        # low-pass from 0 to f
      firwin(N, [0, f], btype='pass', window='nuttall')  # specific window
      firwin(N, [0, f], btype='stop')        # stop from 0 to f --> high-pass
      firwin(N, [f1, f2], btype='pass')      # band-pass filter
      firwin(N, [f1, f2], btype='stop')      # band-stop filter
      firwin(N, [f1, f2, f3, f4], btype='pass')  # multiband filter: starting
          from right, passes [f3,f4], stops [f2,f3], passes [f1,f2];
      firwin(N, [f1, f2, f3, f4], btype='stop')  # multiband filter: starting
          from right, stops [f3,f4], passes [f2,f3], stops [f1,f2]

    Also works:
      firwin(N, f)                 # low-pass from 0 to f
      firwin(N, f, btype='stop')   # high-pass from f to 1

    Parameters
    ----------
    N -- length of filter (number of coefficients, = filter order + 1)
    cutoff -- cutoff frequency of filter (normalized so that 1 corresponds to
        Nyquist or pi radians / sample) OR a list (or array) of cutoff
        frequencies (that is, band edges). cutoff should be positive and
        monotonically increasing.
    width -- if width is not None, then assume it is the approximate width of
        the transition region (normalized so that 1 corresponds to pi) for
        use in kaiser FIR filter design.
    window -- desired window to use. See get_window function for allowed
        values.
    btype -- the band type of filter, either 'pass' or 'stop'; specifies
        whether the last band in the cutoff list is a passband or a stopband
    scale -- boolean, set to True to scale the coefficients so that the
        frequency response is exactly unity at a certain cutoff-dependent
        frequency. The frequency is either:
          0 (DC) if the first passband starts at 0
          1 (Nyquist) if the first passband ends at 1
          center of first passband otherwise

    Returns
    -------
    h -- coefficients of length N fir filter.
    """
    assert btype == 'pass' or btype == 'stop'
    cutoff = numpy.atleast_1d(cutoff).tolist()
    # build up list of pass bands starting from the end (near Nyquist)
    bands = []
    if btype == 'stop':
        cutoff.append(1.0)
    while cutoff:
        right = cutoff.pop()
        if cutoff:
            left = cutoff.pop()
        else:
            left = 0.0
        if left != right:
            bands.insert(0, PassBand(left, right))
    # build up the coefficients
    alpha = N//2
    m = numpy.arange(0,N) - alpha   # time indices of taps
    h = 0
    for band in bands:
        h += band.right*special.sinc(band.right*m)
        h -= band.left*special.sinc(band.left*m)
    # get and apply window
    if isinstance(width,float):
        A = 2.285*N*width + 8
        if (A < 21):
            beta = 0.0
        elif (A <= 50):
            beta = 0.5842*(A-21)**0.4 + 0.07886*(A-21)
        else:
            beta = 0.1102*(A-8.7)
        window = ('kaiser',beta)
    from signaltools import get_window
    win = get_window(window,N,fftbins=1)
    h *= win
    # Now handle scaling if desired
    if scale:
        firstBand = bands[0]
        if firstBand.left == 0:
            scale_frequency = 0.0
        elif firstBand.right == 1:
            scale_frequency = 1.0
        else:
            scale_frequency = .5*(firstBand.left + firstBand.right)
        h /= numpy.sum(h*exp(-1.j*pi*m*scale_frequency))
    return h

import warnings

import numpy as np
from numpy.testing import TestCase, assert_array_almost_equal

from scipy.signal import tf2zpk, bessel, BadCoefficients, firwin, freqz


class TestTf2zpk(TestCase):
    def test_simple(self):
        z_r = np.array([0.5, -0.5])
        p_r = np.array([1.j / np.sqrt(2), -1.j / np.sqrt(2)])
        # Sort the zeros/poles so that we don't fail the test if the order
        # changes
        z_r.sort()
        p_r.sort()
        b = np.poly(z_r)
        a = np.poly(p_r)
        z, p, k = tf2zpk(b, a)
        z.sort()
        p.sort()
        assert_array_almost_equal(z, z_r)
        assert_array_almost_equal(p, p_r)

    def test_bad_filter(self):
        """Regression test for #651: better handling of badly conditioned
        filter coefficients."""
        b, a = bessel(20, 0.1)
        warnings.simplefilter("error", BadCoefficients)
        try:
            try:
                z, p, k = tf2zpk(b, a)
                raise AssertionError("tf2zpk did not warn about bad "\
                                     "coefficients")
            except BadCoefficients:
                pass
        finally:
            warnings.simplefilter("always", BadCoefficients)


class TestFirwin(TestCase):
    def check_response(self, h, expected_response, tol=.05):
        N = len(h)
        alpha = N//2
        m = np.arange(0,N) - alpha   # time indices of taps
        for freq, expected in expected_response:
            actual = abs(np.sum(h*np.exp(-1.j*np.pi*m*freq)))
            mse = abs(actual-expected)**2
            self.assertTrue(mse < tol, 'mse=%g > %g'\
                            %(mse, tol))

    def test_response(self):
        N = 51
        f = .5
        # increase length just to try even/odd
        h = firwin(N, [0, f], btype='pass')     # low-pass from 0 to f
        self.check_response(h, [(.25,1), (.75,0)])
        h = firwin(N+1, [0, f], btype='pass', window='nuttall')  # specific window
        self.check_response(h, [(.25,1), (.75,0)])
        h = firwin(N+2, [0, f], btype='stop')   # stop from 0 to f --> high-pass
        self.check_response(h, [(.25,0), (.75,1)])
        f1, f2, f3, f4 = .2, .4, .6, .8
        h = firwin(N+3, [f1, f2], btype='pass')  # band-pass filter
        self.check_response(h, [(.1,0), (.3,1), (.5,0)])
        h = firwin(N+4, [f1, f2], btype='stop')  # band-stop filter
        self.check_response(h, [(.1,1), (.3,0), (.5,1)])
        h = firwin(N+5, [f1, f2, f3, f4], btype='pass', scale=False)
        self.check_response(h, [(.1,0), (.3,1), (.5,0), (.7,1), (.9,0)])
        h = firwin(N+6, [f1, f2, f3, f4], btype='stop')  # multiband filter
        self.check_response(h, [(.1,1), (.3,0), (.5,1), (.7,0), (.9,1)])
        h = firwin(N+7, 0.1, width=.03)         # low-pass
        self.check_response(h, [(.05,1), (.75,0)])
        h = firwin(N+8, 0.1, btype='stop')      # high-pass
        self.check_response(h, [(.05,0), (.75,1)])

    def mse(self, h, bands):
        """Compute mean squared error versus ideal response across frequency
        band.
          h -- coefficients
          bands -- list of (left, right) tuples relative to 1==Nyquist of
            passbands
        """
        w, H = freqz(h, worN=1024)
        f = w/np.pi
        passIndicator = np.zeros(len(w), bool)
        for left, right in bands:
            passIndicator |= (f >= left) & (f <= right)
        Hideal = np.where(passIndicator, 1, 0)
        mse = np.mean(abs(abs(H)-Hideal)**2)
        return mse

From: tpk at kraussfamily.org (Tom K.) Date: Wed, 6 May 2009 Subject: [SciPy-user] firwin for even order - NOT linear phase? - returns 0 coefficients Message-ID: <23419504.post@talk.nabble.com>

This filter is not linear phase:

signal.firwin(4,.3) --> array([ 0.02051616,  0.23560318,  0.50827748,  0.23560318])

These two filters are identical except that the 2nd has zeros at either end, the 1st has a zero at the beginning - neither filter is actually the length requested (19 and 19 instead of 20 and 21).

In [46]: signal.firwin(20,.3)
Out[46]:
array([  9.34176168e-19,   2.92890318e-03,   6.34234732e-03,
         3.78304197e-03,  -1.23878481e-02,  -3.43265728e-02,
        -3.18598611e-02,   2.65312166e-02,   1.37863165e-01,
         2.51347679e-01,   2.99555858e-01,   2.51347679e-01,
         1.37863165e-01,   2.65312166e-02,  -3.18598611e-02,
        -3.43265728e-02,  -1.23878481e-02,   3.78304197e-03,
         6.34234732e-03,   2.92890318e-03])

In [47]: signal.firwin(21,.3)
Out[47]:
array([  9.34176168e-19,   2.92890318e-03,   6.34234732e-03,
         3.78304197e-03,  -1.23878481e-02,  -3.43265728e-02,
        -3.18598611e-02,   2.65312166e-02,   1.37863165e-01,
         2.51347679e-01,   2.99555858e-01,   2.51347679e-01,
         1.37863165e-01,   2.65312166e-02,  -3.18598611e-02,
        -3.43265728e-02,  -1.23878481e-02,   3.78304197e-03,
         6.34234732e-03,   2.92890318e-03,   9.34176168e-19])

Can anyone explain this behavior? I don't think either filter should end in zero. I suspect something funny about the way the windows are defined - like, they are designed for spectral analysis, not FIR filter design.

-- View this message in context: http://www.nabble.com/firwin-for-even-order-NOT-linear-phase--returns-0-coefficients-tp23419504p23419504.html Sent from the Scipy-User mailing list archive at Nabble.com.

From kenneth.arnold at gmail.com Wed May 6 23:25:52 2009 From: kenneth.arnold at gmail.com (Kenneth Arnold) Date: Wed, 6 May 2009 23:25:52 -0400 Subject: Re: [SciPy-user] sparse SVD In-Reply-To: <4A0216A8.8080903@gmail.com> References: <4A0216A8.8080903@gmail.com> Message-ID:

On Wed, May 6, 2009 at 7:00 PM, Rob Patro wrote: > This looks very interesting indeed. In particular, the matlab > implementation of sparse SVD has one feature of which I make extensive > use; this is the ability to request singular values around a spectral > shift, sigma, of my choosing. Does SVDLIBC offer such an option? How > difficult for me would it be to expose such an option in divisi?

I have not heard of such an operation; what does that accomplish for you? (i.e., should we try it?)

I'm not familiar with SVDLIBC at all beyond the part that we wrap. I looked at the docs a bit and didn't see an obvious option for doing what you describe, but that could be as much because I don't know the thing you're describing! SVDLIBC is simply a C port (not done by us) of the SVDLIB Fortran module.

But if you did find it in SVDLIBC, the Cython (or Pyrex; I don't think we use anything cython-specific) wrappers make it quite easy to expose those things to Python.

-Ken

From rob.patro at gmail.com Wed May 6 23:46:49 2009 From: rob.patro at gmail.com (Rob) Date: Wed, 06 May 2009 23:46:49 -0400 Subject: Re: [SciPy-user] sparse SVD In-Reply-To: References: <4A0216A8.8080903@gmail.com> Message-ID: <4A0259A9.4030507@gmail.com>

Kenneth Arnold wrote: > On Wed, May 6, 2009 at 7:00 PM, Rob Patro wrote: >> This looks very interesting indeed. In particular, the matlab >> implementation of sparse SVD has one feature of which I make extensive >> use; this is the ability to request singular values around a spectral >> shift, sigma, of my choosing. Does SVDLIBC offer such an option? How >> difficult for me would it be to expose such an option in divisi? > > I have not heard of such an operation; what does that accomplish for > you? (i.e., should we try it?) > > I'm not familiar with SVDLIBC at all beyond the part that we wrap. I > looked at the docs a bit and didn't see an obvious option for doing > what you describe, but that could be as much because I don't know the > thing you're describing! SVDLIBC is simply a C port (not done by us) > of the SVDLIB Fortran module. > > But if you did find it in SVDLIBC, the Cython (or Pyrex; I don't think > we use anything cython-specific) wrappers make it quite easy to expose > those things to Python. > > -Ken > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

Ken,

Generally, such an operation is known as a spectral transform. The reason one might wish to do this is as follows. You wish to obtain enough singular value triplets from the decomposition of a large sparse matrix, M, to represent it to some particularly desired precision. In this case, computing the entire decomposition is very wasteful. However, one does not know, a priori, how many singular triplets will be required. So you obtain a small number (say, 50) of the highest energy singular triplets. If you can't represent M to the desired precision, you call the svd routine again, but this time passing a spectral shift closest to the smallest singular value from the previous iteration. This returns 50 more singular triplets which, when composed with the first 50, constitute the first 100 singular triplets of M; as if you had called the SVD routine initially and requested 100 triplets. You can iterate this procedure, requesting a small number of triplets with a shifted spectrum each time until you achieve your desired precision.

In addition to not having to compute singular triplets you don't need, this method usually has another benefit. Often, the sparse eigenvalue decompositions which power sparse SVD libraries are super-linear in the number of requested eigenpairs. Thus, it is often faster to request 50 eigenvectors (singular triplets) twice than it is to request 100 eigenvectors (singular triplets) once. The situation only gets worse when you are requesting many hundreds or thousands of triplets. For example, in our particular application, using the shift-invert spectral transform to iteratively obtain singular triplets, we obtained more than an order of magnitude speed up (we were doing SVDs on hundreds of moderately sized sparse matrices).

--Rob
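In rough scipy terms, the shift-invert loop Rob describes would look something like the sketch below. The names are illustrative rather than definitive: ``eigsh`` stands for an ARPACK-style symmetric eigensolver with shift-invert support (later SciPy versions expose one as scipy.sparse.linalg.eigsh), ``precision_ok`` is a user-supplied stopping test, and deduplication of triplets already found near the shift is glossed over:

import numpy as np
from scipy.sparse.linalg import eigsh   # assumed shift-invert capable solver

def svd_by_shifts(A, precision_ok, k=50):
    # singular triplets of A correspond to eigenpairs of the Gram matrix A^T A
    AtA = (A.T * A).tocsc()
    sing_vals = []
    sigma = None
    while not precision_ok(sing_vals):
        if sigma is None:
            w, v = eigsh(AtA, k=k, which='LM')    # first pass: largest magnitude
        else:
            w, v = eigsh(AtA, k=k, sigma=sigma)   # later passes: shift-invert
        s = np.sqrt(np.abs(w))
        sing_vals.extend(s.tolist())
        sigma = min(s) ** 2   # recenter at the smallest singular value found
    return sorted(sing_vals, reverse=True)

From tanja.gurzhiy at gmail.com Thu May 7 05:32:37 2009 From: tanja.gurzhiy at gmail.com (Tanja Gurzhiy) Date: Thu, 7 May 2009 11:32:37 +0200 Subject: Re: [SciPy-user] SciPy autotester fails on pilutil In-Reply-To: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> References: <8377d7bd0905060805t1c56ee33pa1d3e9396c828cde@mail.gmail.com> Message-ID: <8377d7bd0905070232u5a140dc1v401a58260e01d3a0@mail.gmail.com>

Hi all, Thanks for your answers. Sorry, but I'm still a bit confused... I have PIL.pth in the site-packages directory. And by the way, I also have a SciPy version under Windows where PIL works just fine with "import Image"... Coming back to my SciPy under SUN.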
I've checked the readme file for PIL (I use PIL 1.1.6) - up to PIL version 1.1.4 it was recommended to add PIL to PYTHONPATH and then type (assuming a standard shell):

$ PYTHONPATH=.:./PIL ; export PYTHONPATH
$ python
>>> import _imaging
>>> import Image

If both imports work, you've successfully added PIL to your Python environment. However, for PIL versions 1.1.5-1.1.6 the requirement to add PIL to PYTHONPATH ( $ PYTHONPATH=.:./PIL ; export PYTHONPATH) is not mentioned. So, what is the current situation - do I need to add PIL to my PYTHONPATH in order to make ``import Image`` work directly? I think it is the only solution for me (because the PYTHONPATH environment variable defines what directories the Python interpreter searches for modules), otherwise the source of pilutil.py needs to be updated with "from PIL import Image".

Christopher Barker wrote: >FWIW, I think "import Image" is deprecated, in favor of "from PIL import >Image", which removes the need for the pth file. The PIL package has a >LOT of stuff in it, you really don't want all of it on your sys.path.

How can I see/find that "import Image" is deprecated? I use the newest version of SciPy and it still uses "import Image". I also think the usage of "from PIL import Image" makes the code more readable and structured.

Kind regards, Tanja Gurzhiy

On Wed, May 6, 2009 at 5:05 PM, Tanja Gurzhiy wrote: > Hi all, > > I am currently busy with a SciPy 0.7.0 installation (I have NumPy v1.3.0 > installed and PIL v). > > If I run the scipy autotester (scipy.test()), I get a failure in a testcase > from scipy.misc > > ERROR: Failure: ImportError (No module named Image) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/loader.py", > line 364, in loadTestsFromName addr.filename, addr.module) > > File > "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/importer.py", > line 39, in importFromPath return self.importFromDir(dir_path, fqname) > > File > "/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages/nose/importer.py", > line 84, in importFromDir mod = load_module(part_fqname, fh, filename, > desc) > > File "/home/tgurzhiy/cadlib/scipy/misc/tests/test_pilutil.py", line 12, > in <module> > > import scipy.misc.pilutil as pilutil > > File > "/cadappl/python_scipy/0.7.0/python_scipy/lib/python2.5/site-packages/scipy/misc/pilutil.py", > line 10, in <module> import Image > > ImportError: No module named Image > > > > Line 10 of pilutil.py uses ``import Image``; the module Image is located in PIL. > In my case PYTHONPATH includes only a directory (~/cadlib) that contains PIL, > NumPy, matplotlib and SciPy, so the module Image cannot be imported > directly when scipy.test() is run. > > > > As a consequence of this error in pilutil, the functions from this file > cannot be used (e.g., imread, imshow, etc.) > > > > >>> from scipy import misc > > >>> im = misc.lena() > > >>> im.shape > > (512, 512) > > >>> im1 = misc.imresize(im,1.1) > > Traceback (most recent call last): > > File "<stdin>", line 1, in <module> > > AttributeError: 'module' object has no attribute 'imresize' > > > > Why does scipy/misc/pilutil.py use a direct ``import Image`` and not ``import > PIL.Image``? A work-around could be to update my PYTHONPATH with > ~/cadlib/PIL, but I think that is not what should be done. Correct me if I am > wrong.
> > > > Best regards, > > Tanja Gurzhiy > > > > Please find below the information about my system, platform, etc. > > > > 'from numpy.f2py.diagnose import run; run() > > > > os.name='posix' > > ------ > > sys.platform='sunos5' > > ------ > > sys.version: > > 2.5.1 (r251:54863, Aug 6 2008, 14:27:55) > > [GCC 3.4.3] > > ------ > > sys.prefix: > > /home/tgurzhiy/.caddata/python/python > > ------ > > > sys.path='/usr/local/asm::/home/tgurzhiy/cadlib:/usr/local/asm/lib/python:/usr/local/asm/bin/python:/scratch/sources/numpy/nose-0.10.4-install/lib/python2.5/site-packages:/home/tgurzhiy/.caddata/python/python/lib/python25.zip:/home/tgurzhiy/.caddata/python/python/lib/python2.5:/home/tgurzhiy/.caddata/python/python/lib/python2.5/plat-sunos5:/home/tgurzhiy/.caddata/python/python/lib/python2.5/lib-tk:/home/tgurzhiy/.caddata/python/python/lib/python2.5/lib-dynload:/home/tgurzhiy/.caddata/python/python/lib/python2.5/site-packages' > > ------ > > Failed to import Numeric: No module named Numeric > > Failed to import numarray: No module named numarray > > Found new numpy version '1.3.0' in /home/tgurzhiy/cadlib/numpy/__init__.pyc > > Found f2py2e version '2' in /home/tgurzhiy/cadlib/numpy/f2py/f2py2e.pyc > > Found numpy.distutils version '0.4.0' in > '/home/tgurzhiy/cadlib/numpy/distutils/__init__.pyc' > > ------ > > Importing numpy.distutils.fcompiler ... ok > > ------ > > Checking availability of supported Fortran compilers: > > GnuFCompiler instance properties: > > archiver = ['/home/tgurzhiy/cadbin/g77', '-cr'] > > compile_switch = '-c' > > compiler_f77 = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-fno- > > second-underscore', '-fPIC', '-O3', '-funroll-loops'] > > compiler_f90 = None > > compiler_fix = None > > libraries = ['g2c'] > > library_dirs = > ['/home/cadappl/gcc/3.4.3/gcc/bin/../lib/gcc/sparc-sun- > > solaris2.8/3.4.3'] > > linker_exe = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-g', '- > > Wall'] > > linker_so = ['/home/tgurzhiy/cadbin/g77', '-g', '-Wall', '-g', '- > > Wall', '-shared', '-mimpure-text'] > > object_switch = '-o ' > > ranlib = ['/home/tgurzhiy/cadbin/g77'] > > version = LooseVersion ('3.4.3') > > version_cmd = ['/home/tgurzhiy/cadbin/g77', '--version'] > > Fortran compilers found: > > --fcompiler=gnu GNU Fortran 77 compiler (3.4.3) > > Compilers available for this platform, but not found: > > --fcompiler=g95 G95 Fortran Compiler > > --fcompiler=gnu95 GNU Fortran 95 compiler > > --fcompiler=sun Sun or Forte Fortran 95 Compiler > > Compilers not available on this platform: > > --fcompiler=absoft Absoft Corp Fortran Compiler > > --fcompiler=compaq Compaq Fortran Compiler > > --fcompiler=hpux HP Fortran 90 Compiler > > --fcompiler=ibm IBM XL Fortran Compiler > > --fcompiler=intel Intel Fortran Compiler for 32-bit apps > > --fcompiler=intele Intel Fortran Compiler for Itanium apps > > --fcompiler=intelem Intel Fortran Compiler for EM64T-based apps > > --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps > > --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps > > --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler > > --fcompiler=mips MIPSpro Fortran Compiler > > --fcompiler=nag NAGWare Fortran 95 Compiler > > --fcompiler=none Fake Fortran compiler > > --fcompiler=pg Portland Group Fortran Compiler > > --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler > > For compiler details, run 'config_fc --verbose' setup command. > > ------ > > Importing numpy.distutils.cpuinfo ... 
ok > > ------ > > CPU information: CPUInfoBase__get_nbits getNCPUs is_32bit is_cpusparcv9 is_sparcv9 is_sun4 ------

From SGONG at mdacorporation.com Thu May 7 08:35:12 2009 From: SGONG at mdacorporation.com (Shawn GONG) Date: Thu, 7 May 2009 05:35:12 -0700 Subject: [SciPy-user] when will scipy win32 for Python2.6 be ready References: <5F76F80A436C1146A7C04C6F844026FBB726F6@VMXYVR2.ds.mda.ca> <4A011B50.4050003@ar.media.kyoto-u.ac.jp> Message-ID: <5F76F80A436C1146A7C04C6F844026FBB726FD@VMXYVR2.ds.mda.ca>

Thanks David, Is there a timeline for scipy 0.7.1?

Shawn

-----Original Message----- From: scipy-user-bounces at scipy.org on behalf of David Cournapeau Sent: Tue 5/5/2009 10:08 PM To: SciPy Users List Subject: Re: [SciPy-user] when will scipy win32 for Python2.6 be ready

Shawn GONG wrote: > > hi, > > I am running Python2.6 and MS VS 2005 on Windows XP. > I'd like to use scipy. When will scipy binary install for Python2.6 be > ready? >

Hopefully, scipy 0.7.1 will have a python 2.6 binary.

> I tried to run "python setup.py install", but got error message saying: > numpy.distutils.system_info.BlasNotFoundError: > Blas (http://www.netlib.org/blas/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [blas]) or by setting > the BLAS environment variable. >

Yes, you need both BLAS and LAPACK to build scipy by yourself. Building them is a bit difficult on windows,

cheers, David

_______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user

From Igor.Klubok at sac.com Thu May 7 09:07:54 2009 From: Igor.Klubok at sac.com (Klubok, Igor) Date: Thu, 7 May 2009 09:07:54 -0400 Subject: [SciPy-user] 'from scipy import linalg' fails Message-ID: <174589AD27660C47859425AF90C578FB04CE023C@MAILNYIS03.saccap.int>

Good morning,

I installed scipy 0.7.0 and numpy 1.3.0 on 64bit RedHat Linux 3 ES (kernel 2.4.21-37.Elsmp, AMD-based hardware). Since there was a known bug for incomplete BLAS packages on RedHat, I properly compiled lapack 3.1.1 and atlas 3.8.3 and installed the resultant files in the /usr/local/lib/atlas dir. I set up an env var ATLAS=/usr/local/lib/atlas/lib prior to the scipy build/install process. Here is the error message I get:

% python2.5
>>> from scipy import linalg;
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/python2.5/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/local/python2.5/lib/python2.5/site-packages/scipy/linalg/basic.py", line 389, in <module>
    import decomp
  File "/usr/local/python2.5/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 23, in <module>
    from blas import get_blas_funcs
  File "/usr/local/python2.5/lib/python2.5/site-packages/scipy/linalg/blas.py", line 14, in <module>
    from scipy.linalg import fblas
ImportError: /usr/local/python2.5/lib/python2.5/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_

I would appreciate your advice on how to address this issue.

Thank you, Igor Klubok

DISCLAIMER: This e-mail message and any attachments are intended solely for the use of the individual or entity to which it is addressed and may contain information that is confidential or legally privileged.
If you are not the intended recipient, you are hereby notified that any dissemination, distribution, copying or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately and permanently delete this message and any attachments.

From cournape at gmail.com Thu May 7 09:34:52 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 7 May 2009 22:34:52 +0900 Subject: Re: [SciPy-user] 'from scipy import linalg' fails In-Reply-To: <174589AD27660C47859425AF90C578FB04CE023C@MAILNYIS03.saccap.int> References: <174589AD27660C47859425AF90C578FB04CE023C@MAILNYIS03.saccap.int> Message-ID: <5b8d13220905070634l4d9521a2o877b6413ef592914@mail.gmail.com>

Hi Igor,

On Thu, May 7, 2009 at 10:07 PM, Klubok, Igor wrote: > Good morning, > > I installed scipy 0.7.0 and numpy 1.3.0 on 64bit RedHat Linux 3 ES (kernel > 2.4.21-37.Elsmp, AMD-based hardware) > > Since there was a known bug for incomplete BLAS packages on RedHat, I > properly compiled lapack 3.1.1 and atlas 3.8.3 and installed the resultant > files in /usr/local/lib/atlas dir. I set up an env var > ATLAS=/usr/local/lib/atlas/lib prior to scipy build/install process.

Your ATLAS is not correctly built. Unless you really need the extra speed, you should use the basic blas/lapack from netlib.org, they are much easier to build.

cheers, David

From mchandra at iitk.ac.in Thu May 7 11:04:50 2009 From: mchandra at iitk.ac.in (Mani chandra) Date: Thu, 07 May 2009 08:04:50 -0700 Subject: [SciPy-user] Size of the text in the legend box Message-ID: <4A02F892.2060502@iitk.ac.in>

Hi, How do I adjust the size of the text in the legend box for matplotlib?

Thanks, Mani chandra

From jdh2358 at gmail.com Thu May 7 11:38:16 2009 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 7 May 2009 10:38:16 -0500 Subject: Re: [SciPy-user] Size of the text in the legend box In-Reply-To: <4A02F892.2060502@iitk.ac.in> References: <4A02F892.2060502@iitk.ac.in> Message-ID: <88e473830905070838y22275b2cxa763b2cd0a360998@mail.gmail.com>

On Thu, May 7, 2009 at 10:04 AM, Mani chandra wrote: > > How do I adjust the size of the text in the legend box for matplotlib?

matplotlib questions should be directed to matplotlib-users https://lists.sourceforge.net/lists/listinfo/matplotlib-users

JDH

From dave.hirschfeld at gmail.com Fri May 8 09:21:16 2009 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Fri, 8 May 2009 13:21:16 +0000 (UTC) Subject: [SciPy-user] timeseries - mov_average_expw alters it's input Message-ID:

As demonstrated below the mov_average_expw function changes its input series. Is this known or expected behaviour or a bug? I'd venture to suggest it's a little surprising, especially for new users.
-Dave

from copy import deepcopy
import numpy.ma as ma
from numpy.random import rand
import scikits.timeseries as ts
from scikits.timeseries.lib.moving_funcs import mov_average_expw

N = 256
series = ts.time_series(rand(N),
                ts.date_array(start_date=ts.Date('D','2008-01-01'),length=N))
series[96:128] = ma.masked
original_series = deepcopy(series)
filtered_series = mov_average_expw(series,16)

assert (series.mask == original_series.mask).all()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError

assert (filtered_series.mask == series.mask).all()

ts.__version__
'0.91.1'
import numpy as np; np.__version__
'1.4.0.dev6882'

From josef.pktd at gmail.com Fri May 8 10:30:09 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 8 May 2009 10:30:09 -0400 Subject: Re: [SciPy-user] timeseries - mov_average_expw alters it's input In-Reply-To: References: Message-ID: <1cd32cbb0905080730v125612a3o744705197ff4824d@mail.gmail.com>

On Fri, May 8, 2009 at 9:21 AM, Dave Hirschfeld wrote: > As demonstrated below the mov_average_expw function changes its input series. > Is this known or expected behaviour or a bug? I'd venture to suggest it's a > little surprising, especially for new users. > > -Dave > > from copy import deepcopy > import numpy.ma as ma > from numpy.random import rand > import scikits.timeseries as ts > from scikits.timeseries.lib.moving_funcs import mov_average_expw > > N = 256 > series = ts.time_series(rand(N), > ts.date_array(start_date=ts.Date('D','2008-01-01'),length=N)) > series[96:128] = ma.masked > original_series = deepcopy(series) > filtered_series = mov_average_expw(series,16) > > assert (series.mask == original_series.mask).all() > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > AssertionError > > assert (filtered_series.mask == series.mask).all() > > ts.__version__ > '0.91.1' > import numpy as np; np.__version__ > '1.4.0.dev6882'

I wouldn't be surprised. What would be the moving average of your observations starting at 129, when the previous observations are masked?

Maybe you can try to change the ``tol`` parameter, to get the result you want?

tol : {1e-6, float}, optional
    Tolerance for the definition of the mask. When data contains masked values, this parameter determines what points in the result should be masked. Values in the result that would not be "significantly" impacted (as determined by this parameter) by the masked values are left unmasked.
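For example, something along these lines (an untested sketch -- how loose you can make ``tol`` depends on how much leakage from the masked stretch you are willing to accept):

from scikits.timeseries.lib.moving_funcs import mov_average_expw

filtered_strict = mov_average_expw(series, 16)            # default tol=1e-6
filtered_loose = mov_average_expw(series, 16, tol=1e-2)   # fewer masked points in the result

Josef

From pgmdevlist at gmail.com Fri May 8 11:38:32 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 8 May 2009 11:38:32 -0400 Subject: Re: [SciPy-user] timeseries - mov_average_expw alters it's input In-Reply-To: <1cd32cbb0905080730v125612a3o744705197ff4824d@mail.gmail.com> References: <1cd32cbb0905080730v125612a3o744705197ff4824d@mail.gmail.com> Message-ID: <56C464FD-8728-4CB1-A85B-754612106CDB@gmail.com>

Dave, It looks like a bug indeed, the mask of the original series shouldn't be modified. I'm on it and will let you know when it's fixed. (BTW, I advise you to use numpy.ma.testutils.assert_equal to test the equality of 2 MaskedArrays, instead of the syntax you were using in the example)

On May 8, 2009, at 10:30 AM, josef.pktd at gmail.com wrote: > On Fri, May 8, 2009 at 9:21 AM, Dave Hirschfeld > wrote: >> As demonstrated below the mov_average_expw function changes its >> input series. >> Is this known or expected behaviour or a bug? I'd venture to >> suggest it's a >> little surprising, especially for new users.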
>> >> -Dave >> >> from copy import deepcopy >> import numpy.ma as ma >> from numpy.random import rand >> import scikits.timeseries as ts >> from scikits.timeseries.lib.moving_funcs import mov_average_expw >> >> N = 256 >> series = ts.time_series(rand(N), >> >> ts.date_array(start_date=ts.Date('D','2008-01-01'),length=N)) >> series[96:128] = ma.masked >> original_series = deepcopy(series) >> filtered_series = mov_average_expw(series,16) >> >> assert (series.mask == original_series.mask).all() >> Traceback (most recent call last): >> File "<stdin>", line 1, in <module> >> AssertionError >> >> assert (filtered_series.mask == series.mask).all() >> >> ts.__version__ >> '0.91.1' >> import numpy as np; np.__version__ >> '1.4.0.dev6882' >> > > I wouldn't be surprised. What would be the moving average of your > observations starting at 129, when the previous observations are > masked? > > Maybe you can try to change the ``tol`` parameter, to get the result > you want? > > tol : {1e-6, float}, optional > > Tolerance for the definition of the mask. When data contains > masked values, this parameter determines what points in the result > should be masked. Values in the result that would not be > "significantly" impacted (as determined by this parameter) by the > masked values are left unmasked. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user

From pgmdevlist at gmail.com Fri May 8 12:47:27 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 8 May 2009 12:47:27 -0400 Subject: Re: [SciPy-user] timeseries - mov_average_expw alters it's input In-Reply-To: References: Message-ID:

Dave, That should be fixed in the SVN (r2187). Do you want to give it a try ? Thanks a lot again for reporting. P.

On May 8, 2009, at 9:21 AM, Dave Hirschfeld wrote: > As demonstrated below the mov_average_expw function changes its > input series. > Is this known or expected behaviour or a bug? I'd venture to suggest > it's a > little surprising, especially for new users. > > -Dave > > from copy import deepcopy > import numpy.ma as ma > from numpy.random import rand > import scikits.timeseries as ts > from scikits.timeseries.lib.moving_funcs import mov_average_expw > > N = 256 > series = ts.time_series(rand(N), > > ts.date_array(start_date=ts.Date('D','2008-01-01'),length=N)) > series[96:128] = ma.masked > original_series = deepcopy(series) > filtered_series = mov_average_expw(series,16) > > assert (series.mask == original_series.mask).all() > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > AssertionError > > assert (filtered_series.mask == series.mask).all() > > ts.__version__ > '0.91.1' > import numpy as np; np.__version__ > '1.4.0.dev6882' > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user

From Igor.Klubok at sac.com Fri May 8 13:15:40 2009 From: Igor.Klubok at sac.com (Klubok, Igor) Date: Fri, 8 May 2009 13:15:40 -0400 Subject: Re: [SciPy-user] 'from scipy import linalg' fails In-Reply-To: <5b8d13220905070634l4d9521a2o877b6413ef592914@mail.gmail.com> References: <174589AD27660C47859425AF90C578FB04CE023C@MAILNYIS03.saccap.int> <5b8d13220905070634l4d9521a2o877b6413ef592914@mail.gmail.com> Message-ID: <174589AD27660C47859425AF90C578FB04CE024E@MAILNYIS03.saccap.int>

Hi David,

I am installing scipy for quant researchers and do need the most optimized version of blas there is. Therefore, I chose ATLAS.
ATLAS offers an example of ATLAS/lapack combination on Red Hat. I follow every step of the way exactly as it's stated in the atlas_install.pdf document. I still run into the issue of "undefined symbol: srotmg_" I am wondering if the version of gcc 3.2.3 has anything to do with it. What details do I need to provide here for analysis? Thank you, Igor -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of David Cournapeau Sent: Thursday, May 07, 2009 9:35 AM To: SciPy Users List Subject: Re: [SciPy-user] 'from scipy import linalg' fails Hi Igor, On Thu, May 7, 2009 at 10:07 PM, Klubok, Igor wrote: > Good morning, > > I installed scipy 0.7.0 and numpy 1.3.0 on 64bit RedHat Linux 3 ES > (kernel 2.4.21-37.Elsmp, AMD-based hardware) > > Since there was a known bug for incomplete blas packags on RedHat, I > properly compiled lapack 3.1.1 and atlas 3.8.3 and installed the > resultant files in /usr/local/lib/atlas dir. I set up an env var > ATLAS=/usr/local/lib/atlas/lib prior to scipy build/install process. Your ATLAS is not correctly built. Unless you really need the extra speed, you should use the basic blas/lapack from netlib.org, they are much easier to build. cheers, David _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user DISCLAIMER: This e-mail message and any attachments are intended solely for the use of the individual or entity to which it is addressed and may contain information that is confidential or legally privileged. If you are not the intended recipient, you are hereby notified that any dissemination, distribution, copying or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately and permanently delete this message and any attachments. From ndrukelly at gmail.com Fri May 8 23:53:24 2009 From: ndrukelly at gmail.com (Andrew Kelly) Date: Fri, 8 May 2009 20:53:24 -0700 Subject: [SciPy-user] Force a smooth spline through data points? Message-ID: Is it possible to force a smoothed spline (scipy.interpolate.splrep() or the parametric version) through specific data points? I am basically trying to draw a smooth curve with certain data points that must be included while other (less critical) points need only to be smoothed per the norm. Thanks in advance. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat May 9 05:04:49 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 May 2009 18:04:49 +0900 Subject: [SciPy-user] 'from scipy import linalg' fails In-Reply-To: <174589AD27660C47859425AF90C578FB04CE024E@MAILNYIS03.saccap.int> References: <174589AD27660C47859425AF90C578FB04CE023C@MAILNYIS03.saccap.int> <5b8d13220905070634l4d9521a2o877b6413ef592914@mail.gmail.com> <174589AD27660C47859425AF90C578FB04CE024E@MAILNYIS03.saccap.int> Message-ID: <4A054731.70704@ar.media.kyoto-u.ac.jp> Klubok, Igor wrote: > Hi David, > > I am installing scipy for quant researchers and do need the most optimized version of blas there is. Therefore, I chose ATLAS. > > ATLAS offers an example of ATLAS/lapack combination on Red Hat. > I follow every step of the way exactly as it's stated in the atlas_install.pdf document. 
> > I still run into the issue of "undefined symbol: srotmg_" > First, you should check that you built everything with g77 and not gfortran (atlas, scipy, lapack, everything should use g77 or gfortran, but no mix). Then, you should check that the symbol srotmg_ is indeed in your ATLAS library (the libf77blas.so file, usually). Then, you should check whether the atlas library is indeed linked to your scipy installation (using ldd on scipy/linalg/*.so). You could also give us the build log of scipy, cheers, David From zachary.pincus at yale.edu Sat May 9 08:20:04 2009 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sat, 9 May 2009 08:20:04 -0400 Subject: [SciPy-user] Force a smooth spline through data points? In-Reply-To: References: Message-ID: <59731B7A-C8C2-44A2-8AA6-1AB4A17AF52B@yale.edu> > Is it possible to force a smoothed spline > (scipy.interpolate.splrep() or the parametric version) through > specific data points? I am basically trying to draw a smooth curve > with certain data points that must be included while other (less > critical) points need only to be smoothed per the norm. Thanks in > advance. First off, are the results unsatisfactory when you just force the spline to interpolate all data points (s=0)? If so, then perhaps the best bet is to weight the points (w parameter): give the critical points some very high weight and the noncritical points a lower weight. Presumably if the critical-point- weight is high enough, you should get pretty much exact interpolation for those points even with a nonzero smoothing parameter. Zach From tpk at kraussfamily.org Sat May 9 09:29:14 2009 From: tpk at kraussfamily.org (Tom K.) Date: Sat, 9 May 2009 06:29:14 -0700 (PDT) Subject: [SciPy-user] scipy.signal issues in trak Message-ID: <23460546.post@talk.nabble.com> I am happy to submit a patch to firwin for the new highpass, bandpass, bandstop, and multi-band functionality that I coded up. But it appears that it may languish in trac like several other submitted features that have yet to be integrated in scipy.signal. E.g., fir2, fftfilt (submitted 5/07 and 1/09 respectively). Would it be helpful if I tried to integrate some of these and write tests for them, and then submit a combined patch? -- View this message in context: http://www.nabble.com/scipy.signal-issues-in-trak-tp23460546p23460546.html Sent from the Scipy-User mailing list archive at Nabble.com. From pav at iki.fi Sat May 9 09:37:06 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 9 May 2009 13:37:06 +0000 (UTC) Subject: [SciPy-user] scipy.signal issues in trac References: <23460546.post@talk.nabble.com> Message-ID: Sat, 09 May 2009 06:29:14 -0700, Tom K. wrote: > I am happy to submit a patch to firwin for the new highpass, bandpass, > bandstop, and multi-band functionality that I coded up. > > But it appears that it may languish in trac like several other submitted > features that have yet to be integrated in scipy.signal. E.g., fir2, > fftfilt (submitted 5/07 and 1/09 respectively). > > Would it be helpful if I tried to integrate some of these and write > tests for them, and then submit a combined patch? The more finished you make it, the better chance it has in getting in. The point with tests is that if you submit a new feature that has no tests, someone else needs to find time to write them, to ensure that the code works and does what it says. 
-- Pauli Virtanen

From rmay31 at gmail.com Sat May 9 10:18:00 2009 From: rmay31 at gmail.com (Ryan May) Date: Sat, 9 May 2009 09:18:00 -0500 Subject: Re: [SciPy-user] scipy.signal issues in trak In-Reply-To: <23460546.post@talk.nabble.com> References: <23460546.post@talk.nabble.com> Message-ID:

On Sat, May 9, 2009 at 8:29 AM, Tom K. wrote: > > I am happy to submit a patch to firwin for the new highpass, bandpass, > bandstop, and multi-band functionality that I coded up. > > But it appears that it may languish in trac like several other submitted > features that have yet to be integrated in scipy.signal. E.g., fir2, > fftfilt (submitted 5/07 and 1/09 respectively). > > Would it be helpful if I tried to integrate some of these and write tests > for them, and then submit a combined patch?

I think actually one feature per patch is better. Cleaning up the other patches and adding tests would help the most in getting them incorporated.

Ryan

-- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma

From daelfin at gmail.com Sat May 9 14:21:29 2009 From: daelfin at gmail.com (David F) Date: Sat, 9 May 2009 18:21:29 +0000 (UTC) Subject: [SciPy-user] Maximum of spline? Message-ID:

Hello all, Given the knot points and coefficients of a spline (obtained using scipy.interpolate.splrep or UnivariateSpline), how can I calculate the maximum of the interpolated curve? It seems that it should be fairly straightforward but I can't seem to figure it out -- can anyone point me in the right direction?

Thanks, --D
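One standard recipe, sketched under the assumption that you kept the tck triple returned by splrep: evaluate the spline on a fine grid to bracket the peak, then polish with a bounded scalar minimizer. scipy.interpolate.splev and scipy.optimize.fminbound are real functions; spline_max is not:

import numpy as np
from scipy.interpolate import splev
from scipy.optimize import fminbound

def spline_max(tck, a, b, coarse=1000):
    # coarse scan of the interpolated curve over [a, b]
    x = np.linspace(a, b, coarse)
    y = splev(x, tck)
    i = int(np.argmax(y))
    lo, hi = x[max(i - 1, 0)], x[min(i + 1, coarse - 1)]
    # fminbound minimizes, so negate the spline to find its maximum
    xbest = fminbound(lambda t: -float(splev(t, tck)), lo, hi)
    return xbest, float(splev(xbest, tck))

From s.mientki at ru.nl Sun May 10 06:02:24 2009 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 10 May 2009 12:02:24 +0200 Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ? Message-ID: <4A06A630.40607@ru.nl>

hello,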
self.data = fromfile ( self.Data_File, dtype=int ,count=new_bytes )
MemoryError
Fatal Python error: (pygame parachute) Segmentation Fault

thanks,
Stef Mientki

From stefan at sun.ac.za Sun May 10 09:21:13 2009
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sun, 10 May 2009 15:21:13 +0200
Subject: [SciPy-user] Force a smooth spline through data points?
In-Reply-To: References:
Message-ID: <9457e7c80905100621r532115jba5ab74e85898113@mail.gmail.com>

Hi Andrew

2009/5/9 Andrew Kelly :
> Is it possible to force a smoothed spline (scipy.interpolate.splrep() or the
> parametric version) through specific data points? I am basically trying to
> draw a smooth curve with certain data points that must be included while
> other (less critical) points need only to be smoothed per the norm. Thanks
> in advance.

You can also take a look at subdivision schemes -- they were designed
for this purpose.

Regards
Stéfan

From robert.kern at gmail.com Sun May 10 17:15:37 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 10 May 2009 16:15:37 -0500
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
In-Reply-To: <4A06A630.40607@ru.nl>
References: <4A06A630.40607@ru.nl>
Message-ID: <3d375d730905101415i34de317aw3b0a5728bdaf5fd2@mail.gmail.com>

On Sun, May 10, 2009 at 05:02, Stef Mientki wrote:
> hello,
>
> I'm using "fromfile" to read data from a file generated by another program.
> The data is read on the same operating system as where the file was created.
> I use it in several locations, and the procedure seems to run very well, but ...
> ... when I run the program from the normal Windows command box,
> I get an error message:
>     "40 items requested but only 10 read"
>
> Normally I run all the programs with either a hidden command window or
> from some kind of IDE.
> In the IDE, stdout and stderr are redirected, and I see all print
> commands and errors in the redirection window,
> but I don't see the above error message from numpy.
> So it seems that numpy is using a third kind of output device.
> ???

IDLE redirects sys.stdout and sys.stderr. It does nothing with the C
stdout and stderr file handles. Presumably we are printing stuff out
from the C level using printf(). We probably should be using the
Python API function for printing to sys.std*. Patches welcome.

> Now from the documentation at
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html
> I read:
>
> numpy.fromfile(file, dtype=float, count=-1, sep='')
>
> count : int
>
>     Number of items to read. -1 means all items (i.e., the complete file).
>
> What is an item?
> Is it in bytes, or as in my case, where dtype is a 32-bit integer, in
> integers (which is how I would read the documentation)?

An item is one instance of the dtype.

> Well it must be a byte I guess,
> because if I use bytes as the count, the program is working well,
> but I get the above error message in the Windows command window.

Since the number of bytes is greater than the number of items and you
are getting errors suggesting that you have requested more items than
there are available in the file, I really don't understand how this
sentence can be true.
In [1]: f = open('foo.dat', 'wb')

In [2]: f.write('\x01' * 40)

In [3]: f.close()

In [4]: !ls -l foo.dat
IPython system call: ls -l foo.dat
-rw-r--r-- 1 rkern staff 40 May 10 16:14 foo.dat

In [5]: from numpy import *

In [6]: fromfile('foo.dat', dtype=int32, count=10)
Out[6]:
array([16843009, 16843009, 16843009, 16843009, 16843009, 16843009,
       16843009, 16843009, 16843009, 16843009])

In [7]: fromfile('foo.dat', dtype=int32, count=40)
40 items requested but only 10 read
Out[7]:
array([16843009, 16843009, 16843009, 16843009, 16843009, 16843009,
       16843009, 16843009, 16843009, 16843009])

-- Robert Kern "I have come to believe that the whole world is an
enigma, a harmless enigma that is made terrible by our own mad attempt
to interpret it as though it had an underlying truth." -- Umberto Eco

From pav at iki.fi Mon May 11 02:32:12 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 11 May 2009 06:32:12 +0000 (UTC)
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
References: <4A06A630.40607@ru.nl>
Message-ID:

Sun, 10 May 2009 12:02:24 +0200, Stef Mientki wrote:
> I'm using "fromfile" to read data from a file generated by another
> program. The data is read on the same operating system as where the file
> was created. I use it in several locations, and the procedure seems to run
> very well, but ... ... when I run the program from the normal Windows
> command box, I get an error message:
> "40 items requested but only 10 read"

This is probably because fromfile prints to the C stdio stream. The
correct fix is to make it raise warnings or exceptions instead.

Fromstring raises "ValueError: string is smaller than requested size" if
the string is too short to contain the requested data. Should fromfile do
the same, or do we want to just raise a warning and return fewer items?
Or maybe only return fewer items, without a warning?

[clip]
> Well it must be a byte I guess,
> because if I use bytes as the count, the program is working well, but I
> get the above error message in the Windows command window. If I use the
> integer count, the program crashes almost immediately, and I can
> sometimes see the following error messages:
> 16 items requested but only 10 read
> 12 items requested but only 0 read
> ....
> self.data = fromfile ( self.Data_File, dtype=int ,count=new_bytes )
> MemoryError
> Fatal Python error: (pygame parachute) Segmentation Fault

What version of Numpy is this? IIRC, there was a bug that caused
fromfile to crash earlier, but that was fixed.

-- Pauli Virtanen

From dave.hirschfeld at gmail.com Mon May 11 04:46:33 2009
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Mon, 11 May 2009 08:46:33 +0000 (UTC)
Subject: [SciPy-user] timeseries - mov_average_expw alters its input
References:
Message-ID:

Pierre GM gmail.com> writes:
>
> Dave,
> That should be fixed in the SVN (r2187). Do you want to give it a try ?
> Thanks a lot again for reporting.
> P.
>
> (BTW, I advise you to use numpy.ma.testutils.assert_equal to test the
> equality of 2 MaskedArrays, instead of the syntax you were using in
> the example)

Thanks for the fix (and the info!) That has indeed solved the problem.

-Dave

From alex.liberzon at gmail.com Mon May 11 06:27:54 2009
From: alex.liberzon at gmail.com (Alex)
Date: Mon, 11 May 2009 03:27:54 -0700 (PDT)
Subject: [SciPy-user] Maximum of spline?
In-Reply-To: References:
Message-ID: <2e2b37bc-b4cf-4948-9b46-fc8888a74ff9@s28g2000vbp.googlegroups.com>

maybe, if you know the range of the values, you can use the derivative
of the spline, provided by
scipy.interpolate.splev(xtuple, yourspline, der=1), or even the second
derivative using der=2?

HIH
Alex

On May 9, 9:21 pm, David F wrote:
> Hello all,
>
> Given the knot points and coefficients of a spline (obtained using
> scipy.interpolate.splrep or UnivariateSpline), how can I calculate the maximum
> of the interpolated curve?
>
> It seems that it should be fairly straightforward but I can't seem to figure it
> out -- can anyone point me in the right direction?
>
> Thanks,
> --D
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-u... at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From daelfin at gmail.com Mon May 11 14:12:30 2009
From: daelfin at gmail.com (David F)
Date: Mon, 11 May 2009 18:12:30 +0000 (UTC)
Subject: [SciPy-user] Maximum of spline?
References: <2e2b37bc-b4cf-4948-9b46-fc8888a74ff9@s28g2000vbp.googlegroups.com>
Message-ID:

Alex gmail.com> writes:
>
> maybe, if you know the range of the values, you can use the derivative
> of the spline, provided by
> scipy.interpolate.splev(xtuple, yourspline, der=1), or even the second
> derivative using der=2?

Yes but that would still be approximate, and require evaluating
the (derivative of the) spline on some grid of points. However
since the splines are piecewise polynomials, I was looking for a
way to get the maximum just from the coefficients...

--D

From s.mientki at ru.nl Mon May 11 14:41:04 2009
From: s.mientki at ru.nl (Stef Mientki)
Date: Mon, 11 May 2009 20:41:04 +0200
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
In-Reply-To: <3d375d730905101415i34de317aw3b0a5728bdaf5fd2@mail.gmail.com>
References: <4A06A630.40607@ru.nl> <3d375d730905101415i34de317aw3b0a5728bdaf5fd2@mail.gmail.com>
Message-ID: <4A087140.6050002@ru.nl>

thanks Pauli, Robert,

Robert Kern wrote:
> On Sun, May 10, 2009 at 05:02, Stef Mientki wrote:
>
>> hello,
>>
>> I'm using "fromfile" to read data from a file generated by another program.
>> The data is read on the same operating system as where the file was created.
>> I use it in several locations, and the procedure seems to run very well, but ...
>> ... when I run the program from the normal Windows command box,
>> I get an error message:
>>     "40 items requested but only 10 read"
>>
>> Normally I run all the programs with either a hidden command window or
>> from some kind of IDE.
>> In the IDE, stdout and stderr are redirected, and I see all print
>> commands and errors in the redirection window,
>> but I don't see the above error message from numpy.
>> So it seems that numpy is using a third kind of output device.
>> ???
>
> IDLE redirects sys.stdout and sys.stderr. It does nothing with the C
> stdout and stderr file handles. Presumably we are printing stuff out
> from the C level using printf(). We probably should be using the
> Python API function for printing to sys.std*. Patches welcome.

Sorry but I've chosen Python, because I found C too difficult.

>> Now from the documentation at
>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html
>> I read:
>>
>> numpy.fromfile(file, dtype=float, count=-1, sep='')
>>
>> count : int
>>
>>     Number of items to read. -1 means all items (i.e., the complete file).
>>
>> What is an item?
>> Is it in bytes, or as in my case, where dtype is a 32-bit integer, in
>> integers (which is how I would read the documentation)?
>
> An item is one instance of the dtype.
>
>> Well it must be a byte I guess,
>> because if I use bytes as the count, the program is working well,
>> but I get the above error message in the Windows command window.
>
> Since the number of bytes is greater than the number of items and you
> are getting errors suggesting that you have requested more items than
> there are available in the file, I really don't understand how this
> sentence can be true.

You're absolutely right.
The error was caused by using the requested count later on,
which I've now replaced with the real fetched count.

cheers.
Stef

From Chris.Barker at noaa.gov Mon May 11 14:43:03 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon, 11 May 2009 11:43:03 -0700
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
In-Reply-To: References: <4A06A630.40607@ru.nl>
Message-ID: <4A0871B7.3030800@noaa.gov>

Pauli Virtanen wrote:
> Fromstring raises "ValueError: string is smaller than requested size" if
> the string is too short to contain the requested data. Should fromfile do
> the same,

yes. Or some other exception.

> or do we want to just raise a warning and return fewer items?

maybe, but I don't like that -- you'd have to write code to catch it.

> Or maybe only return fewer items, without a warning?

absolutely not! Then we'd all have to write code to check the result
every time -- yech!

-Chris

-- Christopher Barker, Ph.D. Oceanographer Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329
fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov

From robert.kern at gmail.com Mon May 11 15:18:44 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 11 May 2009 14:18:44 -0500
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
In-Reply-To: <4A0871B7.3030800@noaa.gov>
References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov>
Message-ID: <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com>

On Mon, May 11, 2009 at 13:43, Christopher Barker wrote:
> Pauli Virtanen wrote:
>> Fromstring raises "ValueError: string is smaller than requested size" if
>> the string is too short to contain the requested data. Should fromfile do
>> the same,
>
> yes. Or some other exception.
>
>> or do we want to just raise a warning and return fewer items?
>
> maybe, but I don't like that -- you'd have to write code to catch it.
>
>> Or maybe only return fewer items, without a warning?
>
> absolutely not! Then we'd all have to write code to check the result
> every time -- yech!

There is a long history of returning what bytes you can without
raising an error. This helps a lot when writing code that reads a
chunk at a time. E.g. file.read(nbytes) will return nbytes or fewer if
you get within nbytes of the end of the file.

I suggest using the warnings mechanism. This lets you either silence
the warning or turn it into an exception depending on your use case.

-- Robert Kern "I have come to believe that the whole world is an
enigma, a harmless enigma that is made terrible by our own mad attempt
to interpret it as though it had an underlying truth."
-- Umberto Eco From Chris.Barker at noaa.gov Mon May 11 15:57:58 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 11 May 2009 12:57:58 -0700 Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ? In-Reply-To: <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov> <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> Message-ID: <4A088346.9040505@noaa.gov> Robert Kern wrote: > There is a long history of returning what bytes you can without > raising an error. This helps a lot when writing code that reads a > chunk at a time. E.g. file.read(nbytes) will return nbytes or fewer if > you get within nbytes of the end of the file. maybe so, but fromfile() already supports "read 'till the end of the file", if you don't know how many items you have. maybe a flag? I'm still convinced that we're setting up the users for bugs if they get fewer items than expected and don't get an exception. I see fromfile() as fundamentally higher level than file.read(). > I suggest using the warnings mechanism. This lets you either silence > the warning or turn it into an exception depending on your use case. The other question is what gets returned, I'm pretty sure that when you call fromfile() with a count, it pre-allocates the array, then fills it, so if you don't have enough items in your file, you'll get an array that is the size you expect, but with partially junk in it -- another way to ask for bugs. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Mon May 11 16:07:25 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 May 2009 15:07:25 -0500 Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ? In-Reply-To: <4A088346.9040505@noaa.gov> References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov> <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> <4A088346.9040505@noaa.gov> Message-ID: <3d375d730905111307m1b7af08ak215109255ef7495d@mail.gmail.com> On Mon, May 11, 2009 at 14:57, Christopher Barker wrote: > Robert Kern wrote: >> There is a long history of returning what bytes you can without >> raising an error. This helps a lot when writing code that reads a >> chunk at a time. E.g. file.read(nbytes) will return nbytes or fewer if >> you get within nbytes of the end of the file. > > maybe so, but fromfile() already supports "read 'till the end of the > file", if you don't know how many items you have. But that doesn't let you express, "read at most this number of items," which is really useful and is very common in file-reading APIs. > maybe a flag? I'm still convinced that we're setting up the users for > bugs if they get fewer items than expected and don't get an exception. I > see fromfile() as fundamentally higher level than file.read(). Just because it is "higher level" in one respect doesn't mean that it applies any or all particular "higher level" semantics you might want. In fact, I would suggest the opposite, that it should do precisely *one* thing higher level, which is to deal with dtypes. If it makes you feel better, you may consider the warnings mechanism to be just such a flag, only it uses a Python-standard mechanism for controlling such behavior. >> I suggest using the warnings mechanism. 
This lets you either silence
>> the warning or turn it into an exception depending on your use case.
>
> The other question is what gets returned, I'm pretty sure that when you
> call fromfile() with a count, it pre-allocates the array, then fills
> it, so if you don't have enough items in your file, you'll get an array
> that is the size you expect, but with partially junk in it -- another
> way to ask for bugs.

Currently, it just returns you an array sized for the bytes it could
read; no junk. It's really easy to try it out instead of guessing.

-- Robert Kern "I have come to believe that the whole world is an
enigma, a harmless enigma that is made terrible by our own mad attempt
to interpret it as though it had an underlying truth." -- Umberto Eco

From pav at iki.fi Mon May 11 16:24:15 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 11 May 2009 20:24:15 +0000 (UTC)
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov> <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> <4A088346.9040505@noaa.gov> <3d375d730905111307m1b7af08ak215109255ef7495d@mail.gmail.com>
Message-ID:

Mon, 11 May 2009 15:07:25 -0500, Robert Kern wrote:
> On Mon, May 11, 2009 at 14:57, Christopher Barker
[clip]
>> The other question is what gets returned, I'm pretty sure that when you
>> call fromfile() with a count, it pre-allocates the array, then fills
>> it, so if you don't have enough items in your file, you'll get an array
>> that is the size you expect, but with partially junk in it -- another
>> way to ask for bugs.
>
> Currently, it just returns you an array sized for the bytes it could
> read; no junk. It's really easy to try it out instead of guessing.

Not completely true: if it can't read any items, it raises a
MemoryError. Also, for sep != '' it returns read_count+1 items, the
last one containing junk. A bug, methinks...

-- Pauli Virtanen

From Chris.Barker at noaa.gov Mon May 11 18:48:35 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon, 11 May 2009 15:48:35 -0700
Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ?
In-Reply-To: References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov> <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> <4A088346.9040505@noaa.gov> <3d375d730905111307m1b7af08ak215109255ef7495d@mail.gmail.com>
Message-ID: <4A08AB43.2010106@noaa.gov>

Robert Kern wrote:
> But that doesn't let you express, "read at most this number of items,"
> which is really useful and is very common in file-reading APIs.

fair enough --

>> maybe a flag?

> If it makes you feel better, you may consider the warnings mechanism
> to be just such a flag, only it uses a Python-standard mechanism for
> controlling such behavior.

no, it doesn't -- I don't think warnings are designed for this sort of
thing.

>>> This lets you either silence
>>> the warning or turn it into an exception depending on your use case.

I've been a pythonista for years, and I have no idea how to turn a
warning into an exception, and I DID just spend some time trying to
figure it out -- it does not look easy. Maybe it is, but if it's hard to
figure out, and it won't dawn on many users that they need to, there
will be bugs.

I think we agree that fromfile() needs a way to spell:

"read at most this number of items,"

Why not spell that explicitly? We have "count" to specify whether or not
you want a specific number of items. We could have max_count or whatever.
I don't care how it's spelled but I do want to be able to explicitly spell which I want. In any case, having the warning printed with C stdout is not ideal. Pauli Virtanen wrote: > Not completely true: if it can't read any items, it either raises > MemoryError. Also, for sep != '' it returns read_count+1 items, the >last one containing junk. A bug, methinks... right -- for the record, I wasn't guessing, I was recalling problems I've had in the past. though I should have tested first. Anyway, fromfile needs attention -- we've had discussions about it in the past, but no one has found the time and inclination to give it the attention it needs (including me). oh well. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Mon May 11 18:52:01 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 May 2009 17:52:01 -0500 Subject: [SciPy-user] fromfile, item, what other output than stdout / stderr is used ? In-Reply-To: <4A08AB43.2010106@noaa.gov> References: <4A06A630.40607@ru.nl> <4A0871B7.3030800@noaa.gov> <3d375d730905111218i206c553l753ecad89015be27@mail.gmail.com> <4A088346.9040505@noaa.gov> <3d375d730905111307m1b7af08ak215109255ef7495d@mail.gmail.com> <4A08AB43.2010106@noaa.gov> Message-ID: <3d375d730905111552x35d003f6t555445ac5f03c548@mail.gmail.com> On Mon, May 11, 2009 at 17:48, Christopher Barker wrote: > > Robert Kern wrote: >> But that doesn't let you express, "read at most this number of items," >> which is really useful and is very common in file-reading APIs. > > fair enough -- > >>> maybe a flag? > >> If it makes you feel better, you may consider the warnings mechanism >> to be just such a flag, only it uses a Python-standard mechanism for >> controlling such behavior. > > no, it doesn't -- I don't think warnings are designed for this sort of > thing. > >>>> This lets you either silence >>>> the warning or turn it into an exception depending on your use case. > > I've been a pythonista for years, and I have no idea how to turn a > warning into an exception, and I DID just spend some time trying to > figure it out -- it does not look easy. Maybe it is, but if it's hard to > figure out, and it won't dawn on many users that they need to, there > will be bugs. http://docs.python.org/library/warnings#the-warnings-filter import warnings warnings.simplefilter('error', NotEnoughBytesWarning) > I think we agree that fromfile() needs a way to spell: > > "read at most this number of items," > > Why not spell that explicitly? we have "count" to specify whether or not > you want a specific number of items. We could have max_count or > whatever. I don't care how it's spelled but I do want to be able to > explicitly spell which I want. > > In any case, having the warning printed with C stdout is not ideal. Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mudit_19a at yahoo.com Mon May 11 19:12:14 2009 From: mudit_19a at yahoo.com (mudit sharma) Date: Tue, 12 May 2009 04:42:14 +0530 (IST) Subject: [SciPy-user] pytseries custom frequency Message-ID: <603197.21533.qm@web94915.mail.in2.yahoo.com> Is there a way to work with custom frequencies in pytseries? 
I have time series at freq 5 secs which I am looking to convert to 1 min,
5 min, 30 min. I have looked at the docs and could not find a way to
achieve this. If not in pytseries, is there any other package? Any
suggestions will be much appreciated.

Regards,
Mudit

From pgmdevlist at gmail.com Mon May 11 19:56:07 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 11 May 2009 19:56:07 -0400
Subject: [SciPy-user] pytseries custom frequency
In-Reply-To: <603197.21533.qm@web94915.mail.in2.yahoo.com>
References: <603197.21533.qm@web94915.mail.in2.yahoo.com>
Message-ID:

On May 11, 2009, at 7:12 PM, mudit sharma wrote:
>
> Is there a way to work with custom frequencies in pytseries? I have
> time series at freq 5 secs which I am looking to convert to 1 min, 5
> min, 30 min. I have looked at the docs and could not find a way to
> achieve this. If not in pytseries, is there any other package? Any
> suggestions will be much appreciated.

Unfortunately not. The frequencies are hard-coded in C, and there's no
way to define a custom one (for the moment). However, there might be a
workaround in your case that does not require tseries, as the
frequencies you want are simply related one to the other:

* If your series is regularly spaced, without missing data, you're good
to go to the next step. Else, create a series with a _c.FR_SEC frequency
from your data, fill it with fill_missing_dates, and take every 5
elements with something like series[::5].

* To convert to 1 minute, use series.reshape(-1,12) (12 periods of 5s
in 1 min), so that each row corresponds to a minute. If you work w/ a
time series, just reshape the .series attribute (you don't need the
dates anymore). Make sure the size of the series is indeed divisible by
12. If not, create a masked array with a size of (initial_size//12 + 1)*12
and fill it w/ your data before reshaping.

* To convert to 5-minute intervals, use a .reshape(-1,60)

* To convert to 30-minute intervals, use a .reshape(-1,360)

From taste_of_r at yahoo.com Mon May 11 20:05:01 2009
From: taste_of_r at yahoo.com (Wei Su)
Date: Mon, 11 May 2009 17:05:01 -0700 (PDT)
Subject: [SciPy-user] How to run selected statements in Python?
Message-ID: <738564.4429.qm@web43516.mail.sp1.yahoo.com>

Hi, All:

I am a really green user of Python. And I am now using IDLE. The most
significant inconvenience so far is that I still cannot figure out how
to run selected statements in the interactive mode. In SAS, I can do F3,
in R, F5 and in Matlab F9. But I tried all these keys and still cannot
figure out how to run selected statements/commands.

Any help will be really great!

Wei Su

From david at ar.media.kyoto-u.ac.jp Tue May 12 01:05:17 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 12 May 2009 14:05:17 +0900
Subject: [SciPy-user] How to run selected statements in Python?
In-Reply-To: <738564.4429.qm@web43516.mail.sp1.yahoo.com>
References: <738564.4429.qm@web43516.mail.sp1.yahoo.com>
Message-ID: <4A09038D.4040404@ar.media.kyoto-u.ac.jp>

Hi Wei,

Wei Su wrote:
> Hi, All:
>
> I am a really green user of Python. And I am now using IDLE. The most
> significant inconvenience so far is that I still cannot figure out how
> to run selected statements in the interactive mode. In SAS, I can do
> F3, in R, F5 and in Matlab F9. But I tried all these keys and still
> cannot figure out how to run selected statements/commands.

You will have more luck on IDLE mailing lists.
The answer to your question depends on the environment you are using,
and is not related to scipy,

cheers,

David

From dug.armadale at googlemail.com Tue May 12 04:01:56 2009
From: dug.armadale at googlemail.com (Douglas Macdonald)
Date: Tue, 12 May 2009 09:01:56 +0100
Subject: [SciPy-user] Trivial quadratic equation question
Message-ID: <3ec88f300905120101u74b4561fnfbeddd3e76127919@mail.gmail.com>

Hi,

Does anyone know if scipy already has a direct quadratic
(ax^2 + bx + c = 0) equation solver?

Thank you in advance.

Kind regards,

Douglas

From cimrman3 at ntc.zcu.cz Tue May 12 04:24:50 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 12 May 2009 10:24:50 +0200
Subject: [SciPy-user] ANN: SfePy 2009.2
Message-ID: <4A093252.4070706@ntc.zcu.cz>

I am pleased to announce the release of SfePy 2009.2.

SfePy (simple finite elements in Python) is software, distributed under
the BSD license, for solving systems of coupled partial differential
equations by the finite element method. The code is based on the NumPy
and SciPy packages.

Mailing lists, issue tracking, git repository: http://sfepy.org
Home page: http://sfepy.kme.zcu.cz

People who contributed to this release: Vladimir Lukes.

Major improvements:
- new scripts:
  - isfepy (interactive sfepy) - customized IPython shell for quick
    "compute and visualize" work
  - postproc.py - a script to visualize (via mayavi2) results saved in
    result files
  - probe.py - a script to probe and plot results along geometrical
    objects (e.g. lines, rays) intersecting the mesh
- automatic html documentation generation via doxygen
- extended syntax of equations to allow boundary traces of variables
- fixed live plotting via multiprocessing for multi-core machines
- short input syntax for LCBC conditions, fields, integrals, materials
  and solvers
- new solvers:
  - Broyden and Anderson nonlinear solvers (SciPy implementation)
- new mesh readers:
  - Nastran (.bdf) format
  - Abaqus ascii (.inp)
- new example problems, tests and terms

Applications:
- phononic materials:
  - plotting improved
  - caching of eigen-problem solution and Christoffel acoustic tensor
- schroedinger.py:
  - choose and call DFT solver via solver interface

For more information on this release, see
http://sfepy.googlecode.com/svn/web/releases/2009.2_RELEASE_NOTES.txt

Best regards,
Robert Cimrman

From meesters at gmx.de Tue May 12 04:52:34 2009
From: meesters at gmx.de (Christian Meesters)
Date: Tue, 12 May 2009 10:52:34 +0200
Subject: [SciPy-user] Trivial quadratic equation question
In-Reply-To: <3ec88f300905120101u74b4561fnfbeddd3e76127919@mail.gmail.com>
References: <3ec88f300905120101u74b4561fnfbeddd3e76127919@mail.gmail.com>
Message-ID: <1242118354.6312.5.camel@cm-laptop>

Douglas,

is this what you are looking for?
http://www.scipy.org/Numpy_Example_List#head-d1f2bf93ea599de262de21cdff971704d7591122

HTH
Christian

On Tue, 2009-05-12 at 09:01 +0100, Douglas Macdonald wrote:
> Hi,
>
> Does anyone know if scipy already has a direct quadratic
> (ax^2 + bx + c = 0) equation solver?
>
> Thank you in advance.
>
> Kind regards,
>
> Douglas
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From meesters at gmx.de Tue May 12 04:56:11 2009
From: meesters at gmx.de (Christian Meesters)
Date: Tue, 12 May 2009 10:56:11 +0200
Subject: [SciPy-user] shortcut for weighted standard deviation?
Message-ID: <1242118571.6312.9.camel@cm-laptop>

Hoi,

Is there a shortcut for a weighted standard deviation somewhere in
scipy/numpy - like numpy.average, which returns a weighted average upon
request?

If so, please forgive my naive question: I took a somewhat long break
from using Python ...

TIA
Christian

From nwagner at iam.uni-stuttgart.de Tue May 12 05:23:38 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 12 May 2009 11:23:38 +0200
Subject: [SciPy-user] io.loadmat and mat_struct object
Message-ID:

Hi all,

How can I obtain the contents of the mat_struct object?

>>> A['test_struct']
array([[<scipy.io.matlab.mio5.mat_struct object at
0x2a9a0fb150>]], dtype=object)

Any pointer would be appreciated.

Nils

From mickael.paris at gmail.com Tue May 12 05:49:15 2009
From: mickael.paris at gmail.com (Mickael)
Date: Tue, 12 May 2009 11:49:15 +0200
Subject: [SciPy-user] Re : io.loadmat and mat_struct object
In-Reply-To: References:
Message-ID: <5df0b3120905120249t67d75d02je1bd126a6a471d99@mail.gmail.com>

2009/5/12, Nils Wagner :
> Hi all,
>
> How can I obtain the contents of the mat_struct object?
>
>>>> A['test_struct']
> array([[<scipy.io.matlab.mio5.mat_struct object at
> 0x2a9a0fb150>]], dtype=object)
>
> Any pointer would be appreciated.
>
> Nils
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

Hi Nils,

have you seen the information in the cookbook:

http://www.scipy.org/Cookbook/Reading_mat_files

Mickael.

From nwagner at iam.uni-stuttgart.de Tue May 12 05:57:16 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 12 May 2009 11:57:16 +0200
Subject: [SciPy-user] Re : io.loadmat and mat_struct object
In-Reply-To: <5df0b3120905120249t67d75d02je1bd126a6a471d99@mail.gmail.com>
References: <5df0b3120905120249t67d75d02je1bd126a6a471d99@mail.gmail.com>
Message-ID:

On Tue, 12 May 2009 11:49:15 +0200 Mickael wrote:
> 2009/5/12, Nils Wagner :
>> Hi all,
>>
>> How can I obtain the contents of the mat_struct object?
>>
>>>>> A['test_struct']
>> array([[<scipy.io.matlab.mio5.mat_struct object at
>> 0x2a9a0fb150>]], dtype=object)
>>
>> Any pointer would be appreciated.
>>
>> Nils
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> Hi Nils,
>
> have you seen the information in the cookbook:
>
> http://www.scipy.org/Cookbook/Reading_mat_files
>
> Mickael.

Hi Mickael,

Thank you for the pointer. However the cookbook is very short. How do I
access arrays of structures?

An example would be appreciated.

Cheers,
Nils

From alex.liberzon at gmail.com Tue May 12 06:35:04 2009
From: alex.liberzon at gmail.com (Alex)
Date: Tue, 12 May 2009 03:35:04 -0700 (PDT)
Subject: [SciPy-user] Maximum of spline?
In-Reply-To: References: <2e2b37bc-b4cf-4948-9b46-fc8888a74ff9@s28g2000vbp.googlegroups.com>
Message-ID: <20ef8396-ffaf-4f9f-8280-9d00d9a22ca0@n21g2000vba.googlegroups.com>

but spline is not a polynomial - even using SymPy (symbolic computation)
and deriving the spline coefficients analytically you'll end up with
just another spline, i.e. a set of coefficients that you need to
evaluate. I believe there's no such thing as 'roots of the spline',
but maybe I'm wrong.

On May 11, 9:12 pm, David F wrote:
> Alex gmail.com> writes:
>
> > maybe, if you know the range of the values, you can use the derivative
> > of the spline, provided by
> > scipy.interpolate.splev(xtuple, yourspline, der=1), or even the second
> > derivative using der=2?
>
> Yes but that would still be approximate, and require evaluating
> the (derivative of the) spline on some grid of points. However
> since the splines are piecewise polynomials, I was looking for a
> way to get the maximum just from the coefficients...
>
> --D
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-u... at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From mickael.paris at gmail.com Tue May 12 06:36:36 2009
From: mickael.paris at gmail.com (Mickael)
Date: Tue, 12 May 2009 12:36:36 +0200
Subject: [SciPy-user] Re : Re : io.loadmat and mat_struct object
In-Reply-To: References: <5df0b3120905120249t67d75d02je1bd126a6a471d99@mail.gmail.com>
Message-ID: <5df0b3120905120336p2a1515damdd24838278e9c1df@mail.gmail.com>

2009/5/12, Nils Wagner :
> On Tue, 12 May 2009 11:49:15 +0200
> Mickael wrote:
>> 2009/5/12, Nils Wagner :
>>> Hi all,
>>>
>>> How can I obtain the contents of the mat_struct object?
>>>
>>>>>> A['test_struct']
>>> array([[<scipy.io.matlab.mio5.mat_struct object at
>>> 0x2a9a0fb150>]], dtype=object)
>>>
>>> Any pointer would be appreciated.
>>>
>>> Nils
>>>
>>> _______________________________________________
>>> SciPy-user mailing list
>>> SciPy-user at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> Hi Nils,
>>
>> have you seen the information in the cookbook:
>>
>> http://www.scipy.org/Cookbook/Reading_mat_files
>>
>> Mickael.
>
> Hi Mickael,
>
> Thank you for the pointer. However the cookbook is very
> short. How do I access arrays of structures?

I've just seen that your question is about struct in mat...

> An example would be appreciated.

you can specify the format when you load a Matlab structure with this
option:

struct_as_record : {False, True} optional

temp = io.loadmat('test.mat', struct_as_record=True)
(False is the default for the moment: thus structures are loaded as
objects. With True, structures are loaded as arrays.) This way, you
should be able to get access to your arrays.

> Cheers,
> Nils

Mickael.

From nwagner at iam.uni-stuttgart.de Tue May 12 07:42:09 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 12 May 2009 13:42:09 +0200
Subject: [SciPy-user] Re : Re : io.loadmat and mat_struct object
In-Reply-To: <5df0b3120905120336p2a1515damdd24838278e9c1df@mail.gmail.com>
References: <5df0b3120905120249t67d75d02je1bd126a6a471d99@mail.gmail.com> <5df0b3120905120336p2a1515damdd24838278e9c1df@mail.gmail.com>
Message-ID:

On Tue, 12 May 2009 12:36:36 +0200 Mickael wrote:
> 2009/5/12, Nils Wagner :
>> On Tue, 12 May 2009 11:49:15 +0200
>> Mickael wrote:
>>> 2009/5/12, Nils Wagner :
>>>> Hi all,
>>>>
>>>> How can I obtain the contents of the mat_struct object?
>>>>
>>>>>>> A['test_struct']
>>>> array([[<scipy.io.matlab.mio5.mat_struct object at
>>>> 0x2a9a0fb150>]], dtype=object)
>>>>
>>>> Any pointer would be appreciated.
>>>>
>>>> Nils
>>>>
>>>> _______________________________________________
>>>> SciPy-user mailing list
>>>> SciPy-user at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>
>>> Hi Nils,
>>>
>>> have you seen the information in the cookbook:
>>>
>>> http://www.scipy.org/Cookbook/Reading_mat_files
>>>
>>> Mickael.
>>
>> Hi Mickael,
>>
>> Thank you for the pointer. However the cookbook is very
>> short. How do I access arrays of structures?
>>
> you can specify the format when you load a Matlab
> structure with this option:
>
> struct_as_record : {False, True} optional
>
> temp = io.loadmat('test.mat', struct_as_record=True)
> (False is the default for the moment: thus structures are loaded as
> objects. With True, structures are loaded as arrays.) This way, you
> should be able to get access to your arrays.

temp = io.loadmat('test.mat', struct_as_record=True)

How can I obtain the number of arrays within a struct?
How do I access the different arrays?

An illustrative example would be appreciated. Thanks in advance.

Nils

From josef.pktd at gmail.com Tue May 12 08:48:52 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 12 May 2009 08:48:52 -0400
Subject: [SciPy-user] shortcut for weighted standard deviation?
In-Reply-To: <1242118571.6312.9.camel@cm-laptop>
References: <1242118571.6312.9.camel@cm-laptop>
Message-ID: <1cd32cbb0905120548w325614ffq7fd63d96686e7de7@mail.gmail.com>

On Tue, May 12, 2009 at 4:56 AM, Christian Meesters wrote:
> Hoi,
>
> Is there a shortcut for a weighted standard deviation somewhere in
> scipy/numpy - like numpy.average, which returns a weighted average upon
> request?
>
> If so, please forgive my naive question: I took a somewhat long break
> from using Python ...

Not that I know of; you have to calculate it yourself. An implementation
is attached to trac ticket http://projects.scipy.org/scipy/ticket/604

More of the basic statistics using weights should make their way into
numpy/scipy, but they are not included yet.

Josef

From sebastian.walter at gmail.com Tue May 12 10:07:38 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Tue, 12 May 2009 16:07:38 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: References:
Message-ID:

I am pleased to announce the release of PYADOLC (beta version).

Homepage: http://github.com/b45ch1/pyadolc/
For download and instructions check the homepage.

About the package
=================
PYADOLC is a wrapper of the C++ software ADOL-C.
It computes derivatives of arbitrarily complex algorithms (with loops
and if then else) efficiently on the C++ side.

0) easy and pythonic user interface
1) efficient computation of _gradients_ g, _Hessians_ H and _higher_
order tensors T
2) efficient computation of products dot(u.T, H), dot(H,v) as they
are needed in optimization algorithms
3) well documented by docstrings. For more information one can read
the C++ documentation.
4) extensive unit tests and many examples, including constrained
optimization by projected gradients, etc ...
5) should be suitable for derivative generation of rather large scale
optimization problems, e.g. optimal control problems, inverse problems.
This is not tested though.
6) Sparse Jacobian support.

Suggestions and Bugs:
===================
Please report any bugs or inconveniences that you encounter!
E.g. just write me if you have trouble with the installation.

Everything *should* work as you expect.
Sparse Jacobian support is experimental and the build process needs a
little user assistance but should work.

The API is not completely fixed. However, changes to the API will be
backward compatible.

Hope someone can make use of it.
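To give a flavour of the interface, computing a gradient looks roughly
like this (a minimal sketch; see the examples in the repository for the
authoritative version):

import numpy
import adolc

def f(x):
    return numpy.sum(x*x)

x = numpy.array([1., 2., 3.])

adolc.trace_on(0)         # start recording operations on tape 0
ax = adolc.adouble(x)     # active variables
adolc.independent(ax)
ay = f(ax)
adolc.dependent(ay)
adolc.trace_off()         # stop recording

g = adolc.gradient(0, x)  # evaluate the gradient at x, here 2*x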
regards,
Sebastian

From cimrman3 at ntc.zcu.cz Tue May 12 10:35:10 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 12 May 2009 16:35:10 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: References:
Message-ID: <4A09891E.7010805@ntc.zcu.cz>

Hi Sebastian!

the topic of automatic differentiation is very interesting for me (also
in light of my announcement here not so long ago...). Does ADOL-C derive
the code so that analytical formulas for the jacobians are obtained, or
does it use some finite differencing scheme? I am not familiar with AD,
so pardon my ignorance.

regards,
r.

Sebastian Walter wrote:
> I am pleased to announce the release of PYADOLC (beta version).
>
> Homepage: http://github.com/b45ch1/pyadolc/
> For download and instructions check the homepage.
>
> About the package
> =================
> PYADOLC is a wrapper of the C++ software ADOL-C.
> It computes derivatives of arbitrarily complex algorithms (with loops
> and if then else) efficiently on the C++ side.
>
> 0) easy and pythonic user interface
> 1) efficient computation of _gradients_ g, _Hessians_ H and _higher_
> order tensors T
> 2) efficient computation of products dot(u.T, H), dot(H,v) as they
> are needed in optimization algorithms
> 3) well documented by docstrings. For more information one can read
> the C++ documentation.
> 4) extensive unit tests and many examples, including constrained
> optimization by projected gradients, etc ...
> 5) should be suitable for derivative generation of rather large scale
> optimization problems, e.g. optimal control problems, inverse problems.
> This is not tested though.
> 6) Sparse Jacobian support.
>
> Suggestions and Bugs:
> ===================
> Please report any bugs or inconveniences that you encounter!
> E.g. just write me if you have trouble with the installation.
>
> Everything *should* work as you expect.
> Sparse Jacobian support is experimental and the build process needs a
> little user assistance but should work.
>
> The API is not completely fixed. However, changes to the API will be
> backward compatible.
>
> Hope someone can make use of it.
>
> regards,
> Sebastian
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From Andrew.G.York+scipy at gmail.com Tue May 12 11:41:43 2009
From: Andrew.G.York+scipy at gmail.com (Andrew York)
Date: Tue, 12 May 2009 11:41:43 -0400
Subject: [SciPy-user] Maximum of spline?
In-Reply-To: <20ef8396-ffaf-4f9f-8280-9d00d9a22ca0@n21g2000vba.googlegroups.com>
References: <2e2b37bc-b4cf-4948-9b46-fc8888a74ff9@s28g2000vbp.googlegroups.com> <20ef8396-ffaf-4f9f-8280-9d00d9a22ca0@n21g2000vba.googlegroups.com>
Message-ID: <744bb3c80905120841i3ea788c1h701eed1e6424062d@mail.gmail.com>

I know this is not exactly what you asked about, but I recently had a
similar problem. I approached it by using parabolas for interpolation,
since I know the location and value of the maximum/minimum of a
parabola. For example:

from scipy import array, arange, poly1d, polyfit, take, linspace

data_X = array([0, 1, 2, 3, 4])
data_Y = array([0, 5, 6, 4, 0])
sort_order = data_Y.argsort()

#Interpolate the three points closest to the data maximum.
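#(data_Y.argsort() sorts ascending, so sort_order[-1] is the index of the
# largest Y value; arange(-1, 2) selects that point and its two neighbours)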
#(assuming your maximum isn't at the edge of the dataset):
interp_points = sort_order[-1] + arange(-1, 2)

my_fit = poly1d(polyfit(
    take(data_X, interp_points),
    take(data_Y, interp_points),
    deg = 2))

my_fit_data_X = linspace(0,4,20)
my_fit_data_Y = my_fit(my_fit_data_X)

#Since we used a parabola to interpolate, we know
# where the maximum is, and its value.
extremum_X = -my_fit[1]/(2*my_fit[2])
extremum_Y = my_fit(extremum_X)

#Now let's make sure the fit looks good.
from matplotlib.pyplot import figure, plot, hold, show, close
figure()
plot(data_X, data_Y)
hold(True)
plot(my_fit_data_X, my_fit_data_Y)
plot([extremum_X], [extremum_Y], 'rx')
show()
print "Hit enter to continue..."
raw_input()
close('all')

On Tue, May 12, 2009 at 6:35 AM, Alex wrote:
> but spline is not a polynomial - even using SymPy (symbolic computation)
> and deriving the spline coefficients analytically you'll end up with
> just another spline, i.e. a set of coefficients that you need to
> evaluate. I believe there's no such thing as 'roots of the spline',
> but maybe I'm wrong.
>
> On May 11, 9:12 pm, David F wrote:
>> Alex gmail.com> writes:
>>
>> > maybe, if you know the range of the values, you can use the derivative
>> > of the spline, provided by
>> > scipy.interpolate.splev(xtuple, yourspline, der=1), or even the second
>> > derivative using der=2?
>>
>> Yes but that would still be approximate, and require evaluating
>> the (derivative of the) spline on some grid of points. However
>> since the splines are piecewise polynomials, I was looking for a
>> way to get the maximum just from the coefficients...
>>
>> --D
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-u... at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From pav at iki.fi Tue May 12 12:41:54 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 12 May 2009 16:41:54 +0000 (UTC)
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
References: <4A09891E.7010805@ntc.zcu.cz>
Message-ID:

Tue, 12 May 2009 16:35:10 +0200, Robert Cimrman wrote:
> the topic of automatic differentiation is very interesting for me (also
> in light of my announcement here not so long ago...). Does ADOL-C derive
> the code so that analytical formulas for the jacobians are obtained, or
> does it use some finite differencing scheme? I am not familiar with AD,
> so pardon my ignorance.

See http://en.wikipedia.org/wiki/Automatic_differentiation

AD typically builds an "implicit" graph expression corresponding to the
computation, and constructs the Jacobian based on that. So it's not
symbolic or numerical differentiation.

-- Pauli Virtanen

From cimrman3 at ntc.zcu.cz Tue May 12 12:50:08 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 12 May 2009 18:50:08 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: References: <4A09891E.7010805@ntc.zcu.cz>
Message-ID: <4A09A8C0.8040806@ntc.zcu.cz>

Pauli Virtanen wrote:
> Tue, 12 May 2009 16:35:10 +0200, Robert Cimrman wrote:
>> the topic of automatic differentiation is very interesting for me (also
>> in light of my announcement here not so long ago...). Does ADOL-C derive
>> the code so that analytical formulas for the jacobians are obtained, or
>> does it use some finite differencing scheme? I am not familiar with AD,
>> so pardon my ignorance.
> > See http://en.wikipedia.org/wiki/Automatic_differentiation > > AD typically builds an "implicit" graph expression corresponding to the > computation, and constructs the Jacobian based on that. So it's not > symbolic or numerical differentiation. Thank you Pauli. Note to myself: always ask search engines first. cheers, r. From daelfin at gmail.com Tue May 12 13:15:21 2009 From: daelfin at gmail.com (David F) Date: Tue, 12 May 2009 17:15:21 +0000 (UTC) Subject: [SciPy-user] Maximum of spline? References: <2e2b37bc-b4cf-4948-9b46-fc8888a74ff9@s28g2000vbp.googlegroups.com> <20ef8396-ffaf-4f9f-8280-9d00d9a22ca0@n21g2000vba.googlegroups.com> <744bb3c80905120841i3ea788c1h701eed1e6424062d@mail.gmail.com> Message-ID: Andrew York gmail.com> writes: > I know this is not exactly what you asked about, but I recently had a > similar problem. I approached it by using parabolas for interpolation, > since I know the location and value of the maximum/minimum of a > parabola. For example: > > [...] Thank you, actually this works great for what I needed it for! --D From dwf at cs.toronto.edu Tue May 12 16:25:21 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 12 May 2009 16:25:21 -0400 Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC In-Reply-To: References: <4A09891E.7010805@ntc.zcu.cz> Message-ID: On 12-May-09, at 12:41 PM, Pauli Virtanen wrote: > AD typically builds an "implicit" graph expression corresponding to > the > computation, and constructs the Jacobian based on that. So it's not > symbolic or numerical differentiation. I've never quite understood the difference between what AD does and the 'symbolic' way, but from what I'm reading on Wikipedia it's just a way of *implementing* the chain rule cleverly using graph operations. Is that what you mean Pauli? So it is exact differentiation (to the extent the floating point hardware can provide) rather than an approximation such as finite differences will yield, and thus the resulting code is equivalent in function to what you'd get if you symbolically differentiated and then coded it up, is that right? Cheers, David From rob.patro at gmail.com Tue May 12 16:33:22 2009 From: rob.patro at gmail.com (Rob Patro) Date: Tue, 12 May 2009 16:33:22 -0400 Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC In-Reply-To: References: <4A09891E.7010805@ntc.zcu.cz> Message-ID: <4A09DD12.6080707@gmail.com> Hey guys; I thought I'd chime in here. If you're interested in learning about automatic differentiation, Justin Domke, who's a grad student in my department, has written a series of posts on his blog that are really informative. You can just check out http://justindomke.wordpress.com/. Cheers, Rob David Warde-Farley wrote: > On 12-May-09, at 12:41 PM, Pauli Virtanen wrote: > > >> AD typically builds an "implicit" graph expression corresponding to >> the >> computation, and constructs the Jacobian based on that. So it's not >> symbolic or numerical differentiation. >> > > I've never quite understood the difference between what AD does and > the 'symbolic' way, but from what I'm reading on Wikipedia it's just a > way of *implementing* the chain rule cleverly using graph operations. > Is that what you mean Pauli? 
> > So it is exact differentiation (to the extent the floating point > hardware can provide) rather than an approximation such as finite > differences will yield, and thus the resulting code is equivalent in > function to what you'd get if you symbolically differentiated and then > coded it up, is that right? > > Cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From zhangchensong at gmail.com Tue May 12 19:25:34 2009 From: zhangchensong at gmail.com (Chensong Zhang) Date: Tue, 12 May 2009 19:25:34 -0400 Subject: [SciPy-user] Error when install SciPy on OS X 10.5.6 Message-ID: <541772AA-CA32-4303-8383-1BED998E3103@math.umd.edu> Mac OS X 10.5.6 svn current version of scipy gcc --version i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490) gfortran --version GNU Fortran (GCC) 4.3.3 Warning: No configuration returned, assuming unavailable. blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3'] umfpack_info: libraries umfpack not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib libraries umfpack not found in /sw/lib /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/ python/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/ ) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build_src building py_modules sources building library "dfftpack" sources building library "fftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "dop" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. 
adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.integrate.dop" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/scipy/interpolate/src/ dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/lib/lapack/ clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. 
adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize._nnls" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources conv_template:> build/src.macosx-10.5-i386-2.5/scipy/signal/lfilter.inc Traceback (most recent call last): File "setup.py", line 158, in setup_package() File "setup.py", line 150, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_src.py", line 87, in run self.build_sources() File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_src.py", line 106, in build_sources self.build_extension_sources(ext) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_src.py", line 214, in build_extension_sources sources = self.template_sources(sources, ext) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_src.py", line 322, in template_sources outstr = process_c_file(source) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/conv_template.py", line 191, in process_file % (sourcefile, process_str(''.join(lines)))) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/conv_template.py", line 156, in process_str newstr[sub[0]:sub[1]], sub[4]) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/conv_template.py", line 120, in 
expand_sub for k in range(numsubs): TypeError: range() integer end argument expected, got NoneType.

Chensong

====================================
Mathematics Department
Penn State University, State College, PA
email: zhangchensong at gmail.com
web: zhangcs.wordpress.com
====================================

From eads at soe.ucsc.edu  Tue May 12 20:35:41 2009
From: eads at soe.ucsc.edu (Damian Eads)
Date: Tue, 12 May 2009 17:35:41 -0700
Subject: [SciPy-user] 2D clustering question
In-Reply-To: <49FF74DF.2070500@mac.com>
References: <49FF74DF.2070500@mac.com>
Message-ID: <91b4b1ab0905121735h5f27632ep78ac879d916ef714@mail.gmail.com>

Hi Hazen,

Sorry for getting back to you so late. I was traveling a lot, and I'm
just now catching up on my e-mail.

Without knowing much about the details of your problem, I imagine a lot
of the slowness is caused by the allocation of an exceptionally large
distance matrix, which grows as n^2 in the number of points n.

In some cases, there is no need for a distance matrix representation
since many points will be too far from each other (sparsity in the sense
that many entries will be large) because they belong to different
clusters. Agglomerative clustering may not be helpful when the distance
between points is too large. One solution is to first run k-means, and
then run agglomerative clustering individually on each cluster generated
by k-means. This is less expensive because you are allocating several
smaller distance matrices rather than one big one. Optionally, as a
third step, you can run agglomerative clustering on the centroids of
these clusters to get a coarse approximation of how the clusters relate
to one another. Please keep in mind this idea may be completely
inappropriate for your data.

In other cases, there will be many points close to one another (sparsity
in that many entries are small or close to zero), in which case it may
be worth filtering out points.

Someone suggested a while back adding a feature to the hierarchical
clustering code so that distance matrices aren't used and distances are
only computed when they're needed. I don't know if anyone has made any
progress on that.

Cheers,

Damian

On Mon, May 4, 2009 at 4:06 PM, Hazen Babcock wrote:
>
> Hello,
>
> I've been using scipy.cluster.hierarchy.fclusterdata() to cluster groups
> of points based on their x and y position. This works well for data sets
> without too many points, but seems to get pretty slow as the number
> of points gets into the high thousands (i.e. 6000+). Does anyone know of
> a more specialized clustering algorithm that might be able to handle
> even larger numbers of points, i.e. up to 10e6 or so? The points are
> spread out over 0 - 200 or so in X and Y and I'm clustering with a 0.5
> cutoff. One approach is to break the data set down into smaller sections
> based on X,Y coordinate, but perhaps something like this already exists?
>
> thanks,
> -Hazen
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-----------------------------------------------------
Damian Eads                            Ph.D. Candidate
Jack Baskin School of Engineering, UCSC        E2-489
1156 High Street                 Machine Learning Lab
Santa Cruz, CA 95064    http://www.soe.ucsc.edu/~eads
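For concreteness, a sketch of the two-stage scheme Damian outlines, using
the scipy.cluster API of that era. The cell count k, the 0.5 cutoff, and
the random stand-in data are placeholder choices, not recommendations, and
clusters that straddle two k-means cells will be split -- that is the price
of never allocating the full n^2 distance matrix:

import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.cluster.hierarchy import fclusterdata

points = np.random.uniform(0, 200, size=(100000, 2))  # stand-in for the real x,y data

# Stage 1: coarse partition with k-means; no full distance matrix needed.
k = 50
centroids, coarse = kmeans2(points, k, minit='points')

# Stage 2: agglomerative clustering inside each coarse cell; each call
# only allocates an n_i x n_i distance matrix for the n_i points in it.
labels = np.empty(len(points), dtype=int)
offset = 0
for i in range(k):
    members = np.nonzero(coarse == i)[0]
    if len(members) == 0:
        continue
    if len(members) == 1:
        offset += 1
        labels[members] = offset
        continue
    sub = fclusterdata(points[members], t=0.5, criterion='distance')
    labels[members] = sub + offset   # fclusterdata labels start at 1
    offset += sub.max()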
From eads at soe.ucsc.edu  Tue May 12 21:38:08 2009
From: eads at soe.ucsc.edu (Damian Eads)
Date: Tue, 12 May 2009 18:38:08 -0700
Subject: [SciPy-user] Looking for a way to cluster data
In-Reply-To: <49F3D2A3.3060002@bigpond.net.au>
References: <49F3D2A3.3060002@bigpond.net.au>
Message-ID: <91b4b1ab0905121838k28b8dabdn4685f014bd8282c4@mail.gmail.com>

Hi Gary,

On Sat, Apr 25, 2009 at 8:18 PM, Gary Ruben wrote:
> Hi all,
>
> I'm looking for some advice on how to order data points so that I can
> visualise them. I've been looking at scipy.cluster for this purpose but
> I'm not sure whether it is suitable, so I thought I'd see whether anyone
> had suggestions for a simpler way of ordering the coordinates.

With the dendrogram function, the order in which nodes appear from left
to right can be changed with the distance_sort or count_sort arguments.

> I have a binary 3D array containing 1's that form a shape in a 3D volume
> against a background of 0's - they form a skeleton of a connected,
> branched structure. Furthermore, the points are all 26-connected to each
> other, i.e. there are no gaps in the skeleton. The longest chains may be
> 1000's of points long.
> It would be nice to visualise these using the mayavi mlab plot3d
> function, which draws tubes and which requires ordered coordinates as
> input, so I need to get ordered coordinate lists that traverse the
> points along the branches of the skeleton. It would also be nice to
> preferentially cluster long chains since then I can cull very short
> chains from the visualisation.
>
> scipy.cluster seems to be able to cluster the points but I'm not sure
> how to get the x,y,z coordinates of the original points out of its
> linkage data. This may not be possible.

The rows of the linkage matrix are the clusters, and the first two
columns of each row are the indices of the left and right node,
respectively. If an index is less than the number of points clustered
(i < N), it is a leaf node, i.e. an original point (a singleton
cluster); otherwise (i >= N) it is a non-singleton cluster. Note that
there are always N-1 non-singleton clusters, so the linkage matrix will
always have N-1 rows.

> Maybe the scipy.spatial module
> is a better match to my problem.

I haven't had the chance to read this part of the discussion but I hope
my answer to your question helps.

Cheers,

Damian

-----------------------------------------------------
Damian Eads                            Ph.D. Candidate
Jack Baskin School of Engineering, UCSC        E2-489
1156 High Street                 Machine Learning Lab
Santa Cruz, CA 95064    http://www.soe.ucsc.edu/~eads
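A small sketch of decoding a linkage matrix along the lines Damian
describes; the six random points are made-up data, only the indexing
convention matters:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

N = 6
pts = np.random.rand(N, 3)             # made-up x,y,z coordinates
Z = linkage(pdist(pts), method='single')

# Each of the N-1 rows of Z records one merge:
# [left index, right index, merge distance, points in the new cluster].
for row, (left, right, dist, size) in enumerate(Z):
    print 'row %d: merge at distance %.3f, new cluster of %d points' % (row, dist, int(size))
    for idx in (int(left), int(right)):
        if idx < N:
            print '  leaf %d -> coordinates %s' % (idx, pts[idx])
        else:
            print '  non-singleton cluster formed in row %d' % (idx - N)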
From david at ar.media.kyoto-u.ac.jp  Tue May 12 21:51:40 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 13 May 2009 10:51:40 +0900
Subject: [SciPy-user] Error when install SciPy on OS X 10.5.6
In-Reply-To: <541772AA-CA32-4303-8383-1BED998E3103@math.umd.edu>
References: <541772AA-CA32-4303-8383-1BED998E3103@math.umd.edu>
Message-ID: <4A0A27AC.3040709@ar.media.kyoto-u.ac.jp>

Chensong Zhang wrote:
> Mac OS X 10.5.6
>
> svn current version of scipy
>
> gcc --version
> i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490)
>
> gfortran --version
> GNU Fortran (GCC) 4.3.3
>
> [... configuration output, build log, and traceback quoted in full in
> the original message above; snipped ...]

You are more than likely using an old numpy. Scipy svn requires numpy 1.3.0

cheers,

David

From zhangchensong at gmail.com  Wed May 13 02:04:40 2009
From: zhangchensong at gmail.com (Chensong Zhang)
Date: Wed, 13 May 2009 02:04:40 -0400
Subject: [SciPy-user] Error when install SciPy on OS X 10.5.6
In-Reply-To: <4A0A27AC.3040709@ar.media.kyoto-u.ac.jp>
References: <541772AA-CA32-4303-8383-1BED998E3103@math.umd.edu> <4A0A27AC.3040709@ar.media.kyoto-u.ac.jp>
Message-ID: <92BC90A7-E9D5-42E6-BE7A-B2A1ADCB0009@gmail.com>

Thanks for your comment. But I installed numpy from svn also.

Best
Chensong

====================================
Mathematics Department
Penn State University, State College, PA
email: zhangchensong at gmail.com
web: zhangcs.wordpress.com
====================================

On May 12, 2009, at 9:51 PM, David Cournapeau wrote:
> Chensong Zhang wrote:
>> Mac OS X 10.5.6
>>
>> svn current version of scipy
>>
>> [... build log quoted a second time; snipped down to the failing
>> section ...]
>> building extension "scipy.signal.sigtools" sources >> conv_template:> build/src.macosx-10.5-i386-2.5/scipy/signal/ >> lfilter.inc >> Traceback (most recent call last): >> File "setup.py", line 158, in >> setup_package() >> File "setup.py", line 150, in setup_package >> configuration=configuration ) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/core.py", line 174, in setup >> return old_setup(**new_attr) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/distutils/core.py", line 151, in setup >> dist.run_commands() >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/distutils/dist.py", line 974, in run_commands >> self.run_command(cmd) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ >> python2.5/distutils/dist.py", line 994, in run_command >> cmd_obj.run() >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/command/build_src.py", line 87, >> in run >> self.build_sources() >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/command/build_src.py", line 106, in >> build_sources >> self.build_extension_sources(ext) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/command/build_src.py", line 214, in >> build_extension_sources >> sources = self.template_sources(sources, ext) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/command/build_src.py", line 322, in >> template_sources >> outstr = process_c_file(source) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/conv_template.py", line 191, in >> process_file >> % (sourcefile, process_str(''.join(lines)))) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/conv_template.py", line 156, in >> process_str >> newstr[sub[0]:sub[1]], sub[4]) >> File "/System/Library/Frameworks/Python.framework/Versions/2.5/ >> Extras/lib/python/numpy/distutils/conv_template.py", line 120, in >> expand_sub >> for k in range(numsubs): >> TypeError: range() integer end argument expected, got NoneType. >> >> > > You are more than likely using an old numpy. Scipy svn requires > numpy 1.3.0 > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From david at ar.media.kyoto-u.ac.jp Wed May 13 02:12:38 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 13 May 2009 15:12:38 +0900 Subject: [SciPy-user] Error when install SciPy on OS X 10.5.6 In-Reply-To: <92BC90A7-E9D5-42E6-BE7A-B2A1ADCB0009@gmail.com> References: <541772AA-CA32-4303-8383-1BED998E3103@math.umd.edu> <4A0A27AC.3040709@ar.media.kyoto-u.ac.jp> <92BC90A7-E9D5-42E6-BE7A-B2A1ADCB0009@gmail.com> Message-ID: <4A0A64D6.1070700@ar.media.kyoto-u.ac.jp> Chensong Zhang wrote: > Thanks for your comment. But I installed numpy from svn also. > But that's not the version you are using to install scipy. No expand_sub function is to be found in conv_template.py. 
To check which version of numpy you are actually using, you could use:

python -c "import numpy; print numpy.version.version"

David

From sebastian.walter at gmail.com  Wed May 13 03:51:26 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Wed, 13 May 2009 09:51:26 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: 
References: <4A09891E.7010805@ntc.zcu.cz>
Message-ID: 

On Tue, May 12, 2009 at 10:25 PM, David Warde-Farley wrote:
> On 12-May-09, at 12:41 PM, Pauli Virtanen wrote:
>> AD typically builds an "implicit" graph expression corresponding to the
>> computation, and constructs the Jacobian based on that. So it's not
>> symbolic or numerical differentiation.
>
> I've never quite understood the difference between what AD does and
> the 'symbolic' way, but from what I'm reading on Wikipedia it's just a
> way of *implementing* the chain rule cleverly using graph operations.
> Is that what you mean Pauli?

Yes, this is basically it.

> So it is exact differentiation (to the extent the floating point
> hardware can provide) rather than an approximation such as finite
> differences will yield, and thus the resulting code is equivalent in
> function to what you'd get if you symbolically differentiated and then
> coded it up, is that right?

Yes, it is exact up to the usual floating point error. No, not
necessarily equivalent to coded-up symbolic derivatives: if you have a
function f: R^N --> R with N rather large and you want the gradient
f' \in R^N, then symbolic differentiation would give you N symbolic
expressions, one for each element f'_n. That means that with symbolic
differentiation the cost to compute the gradient scales with N. In AD
it is possible to avoid this scaling with N: the gradient is only as
expensive as a small multiple of the function itself. That means that
if N = 1000 and you need 1 second to evaluate the function, you would
need about 1000 seconds to compute the gradient with symbolic
differentiation, but only a couple of seconds with AD (in the so-called
reverse mode).

Also, AD works on arbitrary algorithms, which means you can also
provide functions with loops in the body. Recursion is typically a
no-go for symbolic differentiation.

> Cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From sebastian.walter at gmail.com  Wed May 13 04:15:29 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Wed, 13 May 2009 10:15:29 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: <4A09DD12.6080707@gmail.com>
References: <4A09891E.7010805@ntc.zcu.cz> <4A09DD12.6080707@gmail.com>
Message-ID: 

On Tue, May 12, 2009 at 10:33 PM, Rob Patro wrote:
> Hey guys; I thought I'd chime in here. If you're interested in learning
> about automatic differentiation, Justin Domke, who's a grad student in
> my department, has written a series of posts on his blog that are really
> informative. You can just check out http://justindomke.wordpress.com/.

The tutorials are very nice indeed: concise, informative and an
interesting read in general.

> Cheers,
> Rob
>
> [... the earlier messages in this thread, quoted in full above;
> snipped ...]
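To make the reverse-mode point concrete, here is a toy tape-based sketch.
It is deliberately not pyadolc's API, just a minimal illustration of why
one forward sweep plus one reverse sweep yields the whole gradient at a
small constant multiple of the cost of evaluating f, independent of N:

import math

class Var(object):
    """One node in the graph recorded by the forward sweep."""
    def __init__(self, value, parents=()):
        self.value = value       # numerical result of the forward sweep
        self.parents = parents   # list of (input Var, local partial derivative)
        self.adjoint = 0.0       # df/d(this node), filled in by the reverse sweep
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def gradient(output, inputs):
    # Order the graph so every node is handled after all of its consumers,
    # then push adjoints from the output back towards the inputs.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.adjoint = 1.0
    for node in reversed(order):
        for parent, partial in node.parents:
            parent.adjoint += node.adjoint * partial
    return [x.adjoint for x in inputs]

# f(x0, x1) = x0*x1 + sin(x0), so grad f = (x1 + cos(x0), x0)
x0, x1 = Var(2.0), Var(3.0)
f = x0 * x1 + sin(x0)
print gradient(f, [x0, x1])   # -> [3.0 + cos(2.0) ~ 2.584, 2.0]

The single reverse sweep visits each recorded operation once, which is why
the cost does not grow with the number of inputs.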
From magnusp at astro.su.se  Wed May 13 06:01:18 2009
From: magnusp at astro.su.se (n.l.o)
Date: Wed, 13 May 2009 03:01:18 -0700 (PDT)
Subject: [SciPy-user] ginput() causes error with TkAgg and imshow()
Message-ID: <23518869.post@talk.nabble.com>

Hello

I am trying to do a ginput() on an imshow() of an image (a 512x512 FITS
file). The first time I do it after starting python I get the error
below. I get it with the TkAgg AND Qt4(Agg) backends, but NOT with the
WX backend.

Does anyone know what to do? Is it a bug, and if so, where do I report it?

Cheers
Magnus

In [6]: pl.ginput()
/usr/lib/python2.6/dist-packages/matplotlib/backend_bases.py:1557: DeprecationWarning: functions overriding warnings.showwarning() must support the 'line' argument
  warnings.warn(str,DeprecationWarning)

ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (70, 0))

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/home/magnusp/msc/reduction/ in ()

/usr/lib/python2.6/dist-packages/matplotlib/pyplot.pyc in ginput(*args, **kwargs)
    356         If *timeout* is negative, does not timeout.
357 """ --> 358 return gcf().ginput(*args, **kwargs) 359 if Figure.ginput.__doc__ is not None: 360 ginput.__doc__ = dedent(Figure.ginput.__doc__) /usr/lib/python2.6/dist-packages/matplotlib/figure.pyc in ginput(self, n, timeout, show_clicks) 1071 blocking_mouse_input = BlockingMouseInput(self) 1072 return blocking_mouse_input(n=n, timeout=timeout, -> 1073 show_clicks=show_clicks) 1074 1075 def waitforbuttonpress(self, timeout=-1): /usr/lib/python2.6/dist-packages/matplotlib/blocking_input.pyc in __call__(self, n, timeout, show_clicks) 256 self.clicks = [] 257 self.marks = [] --> 258 BlockingInput.__call__(self,n=n,timeout=timeout) 259 260 return self.clicks /usr/lib/python2.6/dist-packages/matplotlib/blocking_input.pyc in __call__(self, n, timeout) 102 try: 103 # Start event loop --> 104 self.fig.canvas.start_event_loop(timeout=timeout) 105 finally: # Run even on exception like ctrl-c 106 # Disconnect the callbacks /usr/lib/python2.6/dist-packages/matplotlib/backends/backend_tkagg.pyc in start_event_loop(self, timeout) 320 321 def start_event_loop(self,timeout): --> 322 FigureCanvasBase.start_event_loop_default(self,timeout) 323 start_event_loop.__doc__=FigureCanvasBase.start_event_loop_default.__doc__ 324 /usr/lib/python2.6/dist-packages/matplotlib/backend_bases.pyc in start_event_loop_default(self, timeout) 1555 str = "Using default event loop until function specific" 1556 str += " to this GUI is implemented" -> 1557 warnings.warn(str,DeprecationWarning) 1558 1559 if timeout <= 0: timeout = np.inf /var/lib/python-support/python2.6/pyfits/NP_pyfits.pyc in showwarning(message, category, filename, lineno, file) 74 if file is None: 75 file = sys.stdout ---> 76 _showwarning(message, category, filename, lineno, file) 77 78 def formatwarning(message, category, filename, lineno): /usr/lib/python2.6/warnings.pyc in _show_warning(message, category, filename, lineno, file, line) 27 file = sys.stderr 28 try: ---> 29 file.write(formatwarning(message, category, filename, lineno, line)) 30 except IOError: 31 pass # the file (probably stderr) is invalid - this warning gets lost. TypeError: formatwarning() takes exactly 4 arguments (5 given) -- View this message in context: http://www.nabble.com/ginput%28%29-causes-error-with-TkAgg-and-imshow%28%29-tp23518869p23518869.html Sent from the Scipy-User mailing list archive at Nabble.com. From joshua.stults at gmail.com Wed May 13 07:19:56 2009 From: joshua.stults at gmail.com (Joshua Stults) Date: Wed, 13 May 2009 07:19:56 -0400 Subject: [SciPy-user] Docs for Krogh Interpolator Message-ID: Hello, The docs for doing Hermite polynomial interpolation (specifying function and derivative values at points), seem a little lacking: http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.KroghInterpolator.html I appreciate the explanation it provides and the reference, but it doesn't actually say how to call the function. Are the values specified in several different 1D arrays, or one multidimensional array, in what order, and how do you specify which derivative you're giving? Thanks. 
--
Joshua Stults
Website: http://j-stults.blogspot.com

From pav at iki.fi  Wed May 13 07:26:38 2009
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 13 May 2009 11:26:38 +0000 (UTC)
Subject: [SciPy-user] Docs for Krogh Interpolator
References: 
Message-ID: 

Wed, 13 May 2009 07:19:56 -0400, Joshua Stults wrote:
> The docs for doing Hermite polynomial interpolation (specifying function
> and derivative values at points), seem a little lacking:
>
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.KroghInterpolator.html
[clip]

This is a problem in how our Sphinx documentation is organized. The
documentation is available in the docstrings, but does not end up in the
HTML docs. Needs fixing...

--
Pauli Virtanen

From joshua.stults at gmail.com  Wed May 13 07:33:04 2009
From: joshua.stults at gmail.com (Joshua Stults)
Date: Wed, 13 May 2009 07:33:04 -0400
Subject: [SciPy-user] Docs for Krogh Interpolator
In-Reply-To: 
References: 
Message-ID: 

By docstrings do you mean the text from:

import scipy.interpolate
print scipy.interpolate.KroghInterpolator.__doc__

That gives me the same two paragraphs that are in the html documentation
page. I'm using Python version 2.5.2 and scipy as packaged for Fedora 10.

On Wed, May 13, 2009 at 7:26 AM, Pauli Virtanen wrote:
> [... Pauli's message, quoted in full above; snipped ...]

--
Joshua Stults
Website: http://j-stults.blogspot.com

From peridot.faceted at gmail.com  Wed May 13 07:41:07 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 13 May 2009 07:41:07 -0400
Subject: [SciPy-user] Docs for Krogh Interpolator
In-Reply-To: 
References: 
Message-ID: 

2009/5/13 Joshua Stults:
> [... Joshua's message, quoted in full above; snipped ...]

Actually the information is in
scipy.interpolate.KroghInterpolator.__init__.__doc__; that is, the
documentation on how to construct an object of class KroghInterpolator
is in the constructor docstring, rather than the class docstring. This
is generally confusing and frustrating, but putting it all in the class
docstring is not really a solution, since the docstring really does
describe the constructor.

Even more unfortunately, this docstring does not appear to be present in
the online doc editor. But you can see it by doing
help(scipy.interpolate.KroghInterpolator), which will show the class
docstring, list the methods, and show all their docstrings.

For convenience, here is the content of the constructor docstring:

    """Construct an interpolator passing through the specified points

    The polynomial passes through all the pairs (xi, yi). One may
    additionally specify a number of derivatives at each point xi;
    this is done by repeating the value xi and specifying the
    derivatives as successive yi values.

    Parameters
    ----------
    xi : array-like, length N
        known x-coordinates
    yi : array-like, N by R
        known y-coordinates, interpreted as vectors of length R,
        or scalars if R=1

    Example
    -------
    To produce a polynomial that is zero at 0 and 1 and has
    derivative 2 at 0, call

    >>> KroghInterpolator([0,0,1],[0,2,0])
    """

Could be better, I admit, but it is there.

Anne

> [... the earlier messages in this thread, quoted in full above;
> snipped ...]
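A runnable version of that docstring example. It assumes only that the
interpolator object is called directly on the evaluation points and that
it has the derivative method defined alongside __call__ in
scipy.interpolate's polyint module:

import numpy as np
from scipy.interpolate import KroghInterpolator

# Value 0 and first derivative 2 at x=0 (the repeated xi), value 0 at x=1.
# The unique quadratic through these data is p(x) = 2*x - 2*x**2.
p = KroghInterpolator([0, 0, 1], [0, 2, 0])

x = np.array([0.0, 0.25, 0.5, 1.0])
print p(x)                  # -> [ 0.     0.375  0.5    0.   ]
print p.derivative(x, 1)    # p'(x) = 2 - 4*x -> [ 2.  1.  0. -2.]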
From cimrman3 at ntc.zcu.cz  Wed May 13 08:58:07 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 13 May 2009 14:58:07 +0200
Subject: [SciPy-user] [ANN][Automatic Differentiation] Beta Version of PYADOLC
In-Reply-To: 
References: <4A09891E.7010805@ntc.zcu.cz> <4A09DD12.6080707@gmail.com>
Message-ID: <4A0AC3DF.1080408@ntc.zcu.cz>

Sebastian Walter wrote:
> On Tue, May 12, 2009 at 10:33 PM, Rob Patro wrote:
>> [...] You can just check out http://justindomke.wordpress.com/.
>
> The tutorials are very nice indeed: concise, informative and an
> interesting read in general.

Yes, thanks for the link!

r.

From dug at armadaletechnologies.co.uk  Wed May 13 11:54:31 2009
From: dug at armadaletechnologies.co.uk (Douglas Macdonald)
Date: Wed, 13 May 2009 16:54:31 +0100
Subject: [SciPy-user] Trivial quadratic equation question
In-Reply-To: <1242118354.6312.5.camel@cm-laptop>
References: <3ec88f300905120101u74b4561fnfbeddd3e76127919@mail.gmail.com> <1242118354.6312.5.camel@cm-laptop>
Message-ID: <3ec88f300905130854j7bda66d2ve4facbf100d7aeb5@mail.gmail.com>

Thanks Christian. This looks like it will do the job. No point in
reinventing the wheel.

Best,

Douglas

2009/5/12 Christian Meesters:
> Douglas,
>
> is this what you are looking for?
> http://www.scipy.org/Numpy_Example_List#head-d1f2bf93ea599de262de21cdff971704d7591122
>
> HTH
> Christian
>
> On Tue, 2009-05-12 at 09:01 +0100, Douglas Macdonald wrote:
>> Hi,
>>
>> Does anyone know if scipy already has a direct quadratic
>> (ax^2 + bx + c = 0) equation solver?
>>
>> Thank you in advance.
>>
>> Kind regards,
>>
>> Douglas

--
Dr Douglas Macdonald
Armadale Technologies Ltd
Tel: +44 (0)141 339 2484
Email: dug at armadaletechnologies.co.uk
3/1, 25 Regent Moray Street
Glasgow G3 8AL
Scotland, UK
Web: http://tinyurl.com/armatec

Armadale Technologies Ltd is registered in Scotland, Company Number 313925.
Registered Office: 3/1, 25 Regent Moray Street, Glasgow G3 8AL.
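The Numpy Example List entry Christian links to presumably comes down to
numpy.roots, which handles the quadratic case (and any other polynomial
degree) directly:

import numpy as np

# ax^2 + bx + c = 0 with a=1, b=-3, c=2, i.e. (x - 1)(x - 2) = 0:
print np.roots([1, -3, 2])      # -> [ 2.  1.]

# Complex roots come out automatically, e.g. x^2 + 1 = 0:
print np.roots([1, 0, 1])       # -> [ 0.+1.j  0.-1.j]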
From graham.enos at gmail.com  Wed May 13 12:40:58 2009
From: graham.enos at gmail.com (Graham Enos)
Date: Wed, 13 May 2009 12:40:58 -0400
Subject: [SciPy-user] OS X 10.5 Problems with Scipy Installation
Message-ID: <5F6D8D01-B8CF-4D29-832C-74025FBDF835@gmail.com>

Hey all,

I'm trying to install scipy today, and am having trouble on my intel
macbook. I installed numpy, but scipy doesn't wanna go. Relevant details:

OS X version 10.5
gcc version 4.2.1
gfortran version 4.2.3

I exported MACOSX_DEPLOYMENT_TARGET=10.5 and then ran

py2.5 setup.py build_src build_clib --fcompiler=gfortran build_ext --fcompiler=gfortran build

(py2.5 is my .bash_profile alias for python version 2.5.1) and got the
following traceback (and previous lines of info):

running build_src
building py_modules sources
building library "dfftpack" sources

[... build output snipped; it matches, line for line, the log in the
"Error when install SciPy on OS X 10.5.6" thread above ...]

building extension "scipy.signal.sigtools" sources
conv_template:> build/src.macosx-10.5-i386-2.5/scipy/signal/lfilter.inc
Traceback (most recent call last):

[... same traceback as in that thread, ending with ...]

  File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/numpy/distutils/conv_template.py", line 120, in expand_sub
    for k in range(numsubs):
TypeError: range() integer end argument expected, got NoneType.

Looks like build_src is failing for some reason, though I don't know why.
Any help would be greatly appreciated!

Thanks,
Graham
From dwf at cs.toronto.edu Wed May 13 18:56:34 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 13 May 2009 18:56:34 -0400
Subject: [SciPy-user] going through a lot of plots
Message-ID: 

I wonder if anyone has a good pattern they'd like to share for how to page through a lot of plots (mayavi.mlab or matplotlib or Chaco or whatever).

I find myself in this situation a lot: I'm looking at a sequence of plots, one for each piece of data in a collection. I usually find myself writing a loop with a plot command followed by raw_input() so that I hit enter in the IPython terminal window to move to the next item. I usually make this conditional so that I can process in batch without looking at the plots if I choose.

This has the effect of producing a newline in the terminal every time I want to move on to the next plot, which is far from ideal, especially in the situation where I'm not printing anything else in that window.

I figure there probably is a general way of solving this problem satisfactorily that I just haven't thought of, but any toolkit-specific ideas would be appreciated too. I'd rather avoid mucking with event-handlers since it would force control flow to depend on the plotting toolkit, removing the ability to just "turn off" plot-n-wait.

Any thoughts?

David

From robert.kern at gmail.com Wed May 13 19:18:31 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 13 May 2009 18:18:31 -0500
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: 
References: 
Message-ID: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com>

On Wed, May 13, 2009 at 17:56, David Warde-Farley wrote:
> I wonder if anyone has a good pattern they'd like to share for how to
> page through a lot of plots (mayavi.mlab or matplotlib or Chaco or
> whatever).
>
> I find myself in this situation a lot: I'm looking at a sequence of
> plots, one for each piece of data in a collection. I usually find
> myself writing a loop with a plot command followed by raw_input() so
> that I hit enter in the IPython terminal window to move to the
> next item. I usually make this conditional so that I can process in
> batch without looking at the plots if I choose.
>
> This has the effect of producing a newline in the terminal every time
> I want to move on to the next plot, which is far from ideal,
> especially in the situation where I'm not printing anything else in
> that window.
>
> I figure there probably is a general way of solving this problem
> satisfactorily that I just haven't thought of, but any toolkit-specific
> ideas would be appreciated too. I'd rather avoid mucking with
> event-handlers since it would force control flow to depend on the
> plotting toolkit, removing the ability to just "turn off" plot-n-wait.

I usually write up a quick Traits UI that embeds the Chaco Plot with a slider or whatever to select the dataset. This lets me move forwards and backwards and abort in the middle much more naturally.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
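A sketch of the pattern Robert describes -- not his actual code; the class and trait names here are invented, and the imports assume the ETS 3.x namespacing used elsewhere in this thread:

import numpy as np
from enthought.traits.api import HasTraits, Instance, Range
from enthought.traits.ui.api import View, Item
from enthought.enable.component_editor import ComponentEditor
from enthought.chaco.api import Plot, ArrayPlotData

class Pager(HasTraits):
    # a Range trait gets a slider editor by default; it selects the dataset
    index = Range(0, 9)
    plot = Instance(Plot)
    traits_view = View(Item('plot', editor=ComponentEditor(), show_label=False),
                       Item('index'),
                       width=600, height=400, resizable=True)

    def __init__(self, **kw):
        super(Pager, self).__init__(**kw)
        self.x = np.linspace(0, 10, 200)
        self.datasets = [np.sin(self.x * (k + 1)) for k in range(10)]
        self.data = ArrayPlotData(x=self.x, y=self.datasets[0])
        self.plot = Plot(self.data)
        self.plot.plot(("x", "y"), type="line")

    def _index_changed(self):
        # swap the plotted array in place; the Plot redraws itself
        self.data.set_data("y", self.datasets[self.index])

Pager().configure_traits()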
From zachary.pincus at yale.edu Wed May 13 21:27:21 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 13 May 2009 21:27:21 -0400
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com>
Message-ID: <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu>

>> I find myself in this situation a lot: I'm looking at a sequence of
>> plots, one for each piece of data in a collection. I usually find
>> myself writing a loop with a plot command followed by raw_input() so
>> that I hit enter in the IPython terminal window to move to the
>> next item. I usually make this conditional so that I can process in
>> batch without looking at the plots if I choose.
>>
>> This has the effect of producing a newline in the terminal every time
>> I want to move on to the next plot, which is far from ideal,
>> especially in the situation where I'm not printing anything else in
>> that window.

Old-school alternative is to put the TTY into cbreak (aka "rare" mode, between "raw" and "cooked"), and capture a single key-hit. (Except that ^C still breaks, which is handy.) For windows, the C runtime has a similar getkey function.

Here's windows / posix code for that that I've assembled from various snippets online; note that the latter uses the well-known decorator module. I've also included an "iskeydown" function which I find useful in various situations...

Zach

import os

if os.name == 'nt':
    import msvcrt

    def getkey():
        # blocking single-key read; function keys arrive as two reads
        c = msvcrt.getch()
        if c == '\x00' or c == '\xE0':
            msvcrt.getch()
        return c

    def iskeydown():
        return msvcrt.kbhit()

elif os.name == 'posix':
    import tty, sys, select
    from decorator import decorator

    @decorator
    def _in_cbreak(func, *args, **kws):
        # run func with the terminal in cbreak mode, restoring it afterwards
        fd = sys.stdin.fileno()
        old = tty.tcgetattr(fd)
        tty.setcbreak(fd, tty.TCSANOW)
        try:
            return func(*args, **kws)
        finally:
            tty.tcsetattr(fd, tty.TCSAFLUSH, old)

    @_in_cbreak
    def getkey():
        return sys.stdin.read(1)

    @_in_cbreak
    def iskeydown():
        # non-blocking: return the pending key, or False if none
        if select.select([sys.stdin], [], [], 0) == ([sys.stdin], [], []):
            return sys.stdin.read(1)
        else:
            return False
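A sketch of how these helpers drop into the original plot-and-wait loop. It assumes Zach's snippet is saved as getch.py (the module name is an assumption) and an ipython -pylab session so that draw() does not block:

import numpy as np
import matplotlib.pyplot as plt
from getch import getkey   # hypothetical module holding the code above

x = np.linspace(0, 10, 100)
for k in range(10):
    plt.clf()
    plt.plot(np.sin(x * (k + 1)))
    plt.draw()
    if getkey() == 'q':    # any key advances; 'q' bails out, no newline echoed
        break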
From david at ar.media.kyoto-u.ac.jp Wed May 13 21:44:34 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 14 May 2009 10:44:34 +0900
Subject: [SciPy-user] OS x 10.5 Problems with Scipy Installation
In-Reply-To: <5F6D8D01-B8CF-4D29-832C-74025FBDF835@gmail.com>
References: <5F6D8D01-B8CF-4D29-832C-74025FBDF835@gmail.com>
Message-ID: <4A0B7782.3090009@ar.media.kyoto-u.ac.jp>

Graham Enos wrote:
> Hey all,
>
> I'm trying to install scipy today, and am having trouble on my intel
> macbook. I installed numpy, but scipy doesn't wanna go.
>

See the following discussion:

http://mail.scipy.org/pipermail/scipy-user/2009-May/021030.html

cheers,

David

From gael.varoquaux at normalesup.org Thu May 14 01:31:18 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 14 May 2009 07:31:18 +0200
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com>
Message-ID: <20090514053118.GA16348@phare.normalesup.org>

On Wed, May 13, 2009 at 06:18:31PM -0500, Robert Kern wrote:
> I usually write up a quick Traits UI that embeds the Chaco Plot with a
> slider or whatever to select the dataset. This lets me move forwards
> and backwards and abort in the middle much more naturally.

Same thing with Mayavi's mlab (I look at 3D data).

I modify in place the objects plotted, for speed. Check out
https://svn.enthought.com/enthought/browser/Mayavi/trunk/examples/mayavi/interactive/mlab_interactive_dialog.py
for some hints on how to do this.

Gaël

From cimrman3 at ntc.zcu.cz Thu May 14 02:30:15 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 14 May 2009 08:30:15 +0200
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu>
Message-ID: <4A0BBA77.8040602@ntc.zcu.cz>

Zachary Pincus wrote:
>>> I find myself in this situation a lot: I'm looking at a sequence of
>>> plots, one for each piece of data in a collection. I usually find
>>> myself writing a loop with a plot command followed by raw_input() so
>>> that I hit enter in the IPython terminal window to move to
>>> the next item. I usually make this conditional so that I can process in
>>> batch without looking at the plots if I choose.
>>>
>>> This has the effect of producing a newline in the terminal every time
>>> I want to move on to the next plot, which is far from ideal,
>>> especially in the situation where I'm not printing anything else in
>>> that window.
>
> Old-school alternative is to put the TTY into cbreak (aka "rare" mode,
> between "raw" and "cooked"), and capture a single key-hit. (Except
> that ^C still breaks, which is handy.) For windows, the C runtime has
> a similar getkey function.
>
> Here's windows / posix code for that that I've assembled from various
> snippets online; note that the latter uses the well-known decorator
> module. I've also included an "iskeydown" function which I find useful
> in various situations...

I have collected some useful snippets to do a similar thing too - now I merged the functionality, added some sugar (pause functions), stirred and cooked - see the attachment.

$ python getch.py

Are you ok with me using it in my (BSD) project?

thanks,
r.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: getch.py
Type: text/x-python
Size: 3026 bytes
Desc: not available
URL: 
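A minimal version of the in-place update Gael describes above (a sketch against the ETS 3.x mlab API; mlab_source is the documented handle for mutating already-plotted data):

import numpy as np
from enthought.mayavi import mlab

x = np.linspace(0, 4 * np.pi, 200)
s = mlab.plot3d(x, np.sin(x), np.zeros_like(x), tube_radius=0.1)
for phase in np.linspace(0, 2 * np.pi, 30):
    # mutate the existing pipeline object instead of replotting from scratch
    s.mlab_source.set(y=np.sin(x + phase))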
From cimrman3 at ntc.zcu.cz Thu May 14 02:34:46 2009
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 14 May 2009 08:34:46 +0200
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <20090514053118.GA16348@phare.normalesup.org>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <20090514053118.GA16348@phare.normalesup.org>
Message-ID: <4A0BBB86.3060209@ntc.zcu.cz>

Hi Gael,

Gael Varoquaux wrote:
> On Wed, May 13, 2009 at 06:18:31PM -0500, Robert Kern wrote:
>> I usually write up a quick Traits UI that embeds the Chaco Plot with a
>> slider or whatever to select the dataset. This lets me move forwards
>> and backwards and abort in the middle much more naturally.
>
> Same thing with Mayavi's mlab (I look at 3D data).
>
> I modify in place the objects plotted, for speed. Check out
> https://svn.enthought.com/enthought/browser/Mayavi/trunk/examples/mayavi/interactive/mlab_interactive_dialog.py
> for some hints on how to do this.

Just to let you know that the example does not work right away with ets-3.2.0:

$ ./mlab_interactive_dialog.py
Traceback (most recent call last):
  File "./mlab_interactive_dialog.py", line 36, in <module>
    from enthought.mayavi.core.api import PipelineBase
ImportError: No module named api

It works perfectly after this small change:

from enthought.mayavi.core.api import PipelineBase
->
from enthought.mayavi.core.pipeline_base import PipelineBase

cheers,
r.

From gael.varoquaux at normalesup.org Thu May 14 03:04:13 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 14 May 2009 09:04:13 +0200
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <4A0BBB86.3060209@ntc.zcu.cz>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <20090514053118.GA16348@phare.normalesup.org> <4A0BBB86.3060209@ntc.zcu.cz>
Message-ID: <20090514070413.GA32437@phare.normalesup.org>

On Thu, May 14, 2009 at 08:34:46AM +0200, Robert Cimrman wrote:
> Just to let you know that the example does not work right away with
> ets-3.2.0
> $ ./mlab_interactive_dialog.py
> Traceback (most recent call last):
>   File "./mlab_interactive_dialog.py", line 36, in <module>
>     from enthought.mayavi.core.api import PipelineBase
> ImportError: No module named api
> It works perfectly after this small change:
> from enthought.mayavi.core.api import PipelineBase
> ->
> from enthought.mayavi.core.pipeline_base import PipelineBase

Thanks for pointing this out. I should never point to examples in the trunk, but only in the tags. Pointing to examples in the trunk leads to examples not working on people's boxes...

Gaël

From zachary.pincus at yale.edu Thu May 14 06:46:03 2009
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Thu, 14 May 2009 06:46:03 -0400
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <4A0BBA77.8040602@ntc.zcu.cz>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu> <4A0BBA77.8040602@ntc.zcu.cz>
Message-ID: 

> I have collected some useful snippets to do a similar thing too -
> now I merged the functionality, added some sugar (pause functions),
> stirred and cooked - see the attachment.
>
> Are you ok with me using it in my (BSD) project?

Feel free to use it for anything!

From aisaac at american.edu Thu May 14 11:20:53 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 14 May 2009 11:20:53 -0400
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: 
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu> <4A0BBA77.8040602@ntc.zcu.cz>
Message-ID: <4A0C36D5.7070309@american.edu>

On 5/14/2009 6:46 AM Zachary Pincus apparently wrote:
> Feel free to use it for anything!
> > Just a reminder that these days it is much safer to be > explicit. E.g., "I release this code into the public > domain" or "I release this code under the 3 clause > BSD license." Thanks Alan. I release that code into the public domain. Question, though: safer for whom? I presume it's safer for the person using the code... are there any considerations for the original author of the code, though? Zach From aisaac at american.edu Thu May 14 11:46:13 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 14 May 2009 11:46:13 -0400 Subject: [SciPy-user] going through a lot of plots In-Reply-To: <20C6A627-2AB0-40C5-BA17-030FA04DAE04@yale.edu> References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu> <4A0BBA77.8040602@ntc.zcu.cz> <4A0C36D5.7070309@american.edu> <20C6A627-2AB0-40C5-BA17-030FA04DAE04@yale.edu> Message-ID: <4A0C3CC5.3000806@american.edu> On 5/14/2009 11:34 AM Zachary Pincus apparently wrote: > Question, though: safer for whom? I presume it's safer for the person > using the code... are there any considerations for the original author > of the code, though? Only the following: the author's actual intent is more likely to be realized. Cheers, Alan From dwf at cs.toronto.edu Thu May 14 11:47:07 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 14 May 2009 11:47:07 -0400 Subject: [SciPy-user] going through a lot of plots In-Reply-To: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> Message-ID: On 13-May-09, at 7:18 PM, Robert Kern wrote: > I usually write up a quick Traits UI that embeds the Chaco Plot with a > slider or whatever to select the dataset. This lets me move forwards > and backwards and abort in the middle much more naturally. Thanks for the idea. I followed Gael's tutorial example pretty closely and did the same with my existing matplotlib code (not that Chaco isn't great and all, just taking the path of least resistance at this point). This has a slight problem when running ipython with -wthread, in that if I want to view several disparate groups (i.e. I have many images, and I'm processing them one by one, so I've set it up so I can look at all the objects in an image) - there is no way (as far as I can tell) to get my function to wait on configure_traits() before moving on to the next image (I suppose I could embed another slider or something to select the image...). Is there any way to tell a GUI to take the interpreter thread hostage? David From dwf at cs.toronto.edu Thu May 14 11:49:34 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 14 May 2009 11:49:34 -0400 Subject: [SciPy-user] going through a lot of plots In-Reply-To: <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu> References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <844DAAC1-1738-4412-A5B7-06C72638743A@yale.edu> Message-ID: On 13-May-09, at 9:27 PM, Zachary Pincus wrote: > Old-school alternative is to put the TTY into cbreak (aka "rare" mode, > between "raw" and "cooked"), and capture a single key-hit. (Except > that ^C still breaks, which is handy.) For windows, the C runtime has > a similar getkey function. > > Here's windows / posix code for that that I've assembled from various > snippets online; note that the latter uses the well-known decorator > module. I've also included an "iskeydown" function which I find useful > in various situations... Awesome, thanks! 
David

From aisaac at american.edu Thu May 14 12:24:35 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 14 May 2009 12:24:35 -0400
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: 
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> 
Message-ID: <4A0C45C3.10605@american.edu>

On 5/14/2009 11:47 AM David Warde-Farley apparently wrote:
> Thanks for the idea. I followed Gael's tutorial example pretty closely
> and did the same with my existing matplotlib code

Could you add this to the Matplotlib cookbook?
http://www.scipy.org/Cookbook/Matplotlib

Thanks,
Alan Isaac

From dwf at cs.toronto.edu Thu May 14 15:30:35 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 14 May 2009 15:30:35 -0400
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: <4A0C45C3.10605@american.edu>
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> <4A0C45C3.10605@american.edu>
Message-ID: 

Done. Although that cookbook could use some serious TLC... For one thing, maybe that mplot3d stuff ought to be removed? I didn't, but I made the warning explicit on the main cookbook page.

David

On 14-May-09, at 12:24 PM, Alan G Isaac wrote:
> On 5/14/2009 11:47 AM David Warde-Farley apparently wrote:
>> Thanks for the idea. I followed Gael's tutorial example pretty closely
>> and did the same with my existing matplotlib code
>
> Could you add this to the Matplotlib cookbook?
> http://www.scipy.org/Cookbook/Matplotlib
>
> Thanks,
> Alan Isaac
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Thu May 14 16:17:09 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 14 May 2009 15:17:09 -0500
Subject: [SciPy-user] going through a lot of plots
In-Reply-To: 
References: <3d375d730905131618sfe160abg994224b3ccc40f6a@mail.gmail.com> 
Message-ID: <3d375d730905141317j5c863e2ax4f8c8b5f0ca3611c@mail.gmail.com>

On Thu, May 14, 2009 at 10:47, David Warde-Farley wrote:
> On 13-May-09, at 7:18 PM, Robert Kern wrote:
>
>> I usually write up a quick Traits UI that embeds the Chaco Plot with a
>> slider or whatever to select the dataset. This lets me move forwards
>> and backwards and abort in the middle much more naturally.
>
> Thanks for the idea. I followed Gael's tutorial example pretty closely
> and did the same with my existing matplotlib code (not that Chaco
> isn't great and all, just taking the path of least resistance at this
> point).
>
> This has a slight problem when running ipython with -wthread, in that
> if I want to view several disparate groups (i.e. I have many images,
> and I'm processing them one by one, so I've set it up so I can look at
> all the objects in an image) - there is no way (as far as I can tell)
> to get my function to wait on configure_traits() before moving on to
> the next image (I suppose I could embed another slider or something to
> select the image...).

The latter would probably be preferable for much the same reasons as using a slider for selecting each object. But it could be more code.

> Is there any way to tell a GUI to take the
> interpreter thread hostage?

obj.edit_traits(kind='livemodal')

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
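The blocking call in context, per Robert's one-liner (a sketch only; ImageViewer and images are hypothetical stand-ins for whatever HasTraits class wraps the plot and the data being paged through):

for img in images:
    viewer = ImageViewer(image=img)
    # kind='livemodal' blocks right here until the dialog is closed,
    # so the loop only advances once the window is dismissed
    viewer.edit_traits(kind='livemodal')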
From ross.wilson at ga.gov.au Fri May 15 03:37:39 2009
From: ross.wilson at ga.gov.au (Ross Wilson)
Date: Fri, 15 May 2009 07:37:39 +0000 (UTC)
Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types?
References: <4626A9B3.40906@ee.byu.edu> <4626E946.2090107@ieee.org>
Message-ID: 

Travis Oliphant ieee.org> writes:

> Rob Clewley wrote:
> > Fair enough, but it does cause a *real* problem when I extract the
> > values from aa and pass them on to other functions which try to
> > compare their types to the integer types int and int32 that I can
> > import from numpy. Since the values I'm testing could equally have
> > been generated by functions that return the regular int type I can't
> > guarantee that those values will have a dtype attribute!
>
> You don't have to use the bit-width names (which can be confusing) in
> such cases. There is a regular name for every C-like type.
>
> You can use the names byte, short, intc, int_, longlong (and
> corresponding unsigned names prefixed with u)
>
> > I have some initialization code for a big class that has to set up
> > some state differently depending on the type of the input. So, I was
> > trying to do something like this
> >
> > if type(x) in [int, int32]:
> >     ## do stuff specific to integer x
> >
> > but now it seems like I'll need
> >
> > try:
> >     isint = x.dtype == dtype('int32')
> > except AttributeError:
> >     isint = type(x) == int
> > if isint:
> >     ## do stuff specific to integer x
>
> try:
>     if isinstance(x, (int, integer))
>
> integer is the super-class of all c-like integer types.
>
> > -- which is a mess! Is there a better way to do this test cleanly and
> > robustly? And why couldn't c_long always correspond to a unique numpy
> > name (i.e., not shared with int32) regardless of how it's implemented?
>
> There is a unique numpy name for all of them. The bit-width names just
> can't be unique.
>
> > Either way it would be helpful to have a name for this "other" int32
> > that I can test against using the all-purpose type() ... so that I
> > could test something like
> >
> > type(x) in [int, int32_c_long, int32_c_int]
>
> isinstance(x, (int, intc, int_))
>
> is what you want.
>
> -Travis

In a slightly different context, we have found a situation with the type comparison of two *general* objects (ie, we don't know if they are numpy objects or something else) that confused us mightily. For example, in a recursive general object comparison function we have:

def compare(A, B):
    if type(A) is not type(B):
        return False
    <...>

where the <...> code may call compare() recursively after splitting complex objects into less complex objects.

The confusion comes when we debug a comparison that should return *equal* but doesn't due to the type comparison saying the objects have different types. Printing the type() of A and B shows both as numpy.int32 but they are *not* equal types (the id(type()) values differ). That is confusing.

Wouldn't it be better if numpy types that have the same underlying bit representation (integer, 32bit) used the same type object? Or, if that can't be done, arrange for the different object types to display different representation strings? That would remove the confusion we experience when we see the same type string for objects that aren't the same type.

Ross
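A runnable distillation of Travis's isinstance advice applied to the compare() situation: np.integer is the common base of every C-like integer scalar type, so the identity of the individual type objects stops mattering.

import numpy as np

a = np.array([3, 1, 2]).argsort()[0]   # a numpy integer scalar
b = 2                                   # a plain Python int

print type(a) is type(b)                # False -- the confusing case
print isinstance(a, (int, np.integer)) and \
      isinstance(b, (int, np.integer))  # True  -- the robust check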
From mudit_19a at yahoo.com Fri May 15 21:10:48 2009
From: mudit_19a at yahoo.com (mudit sharma)
Date: Sat, 16 May 2009 06:40:48 +0530 (IST)
Subject: [SciPy-user] concave and convex function
Message-ID: <892795.73286.qm@web94915.mail.in2.yahoo.com>

I have following sample (actual dataset is much bigger):

[ 0.48  0.64  0.69  0.67  0.67  0.65  0.68  0.63  0.62  0.61  0.61  0.6
  0.58  0.62  0.64  0.63  0.63  0.61  0.60  0.61  0.62  0.65  0.67  0.67
  0.67  0.68  0.68  0.66  0.68  0.65  0.65  0.65  0.65  0.64  0.64  0.65
  0.65  0.66  0.66  0.64  0.65  0.64  0.68  0.69  0.70  0.69  0.7   0.68
  0.64  0.64  0.65  0.67  0.68  0.67  0.67  0.66  0.66  0.64  0.64  0.58
  0.53  0.53  0.52]

I am looking to scan for all M & W curve formation and cycles. For this I need to detect all concave and convex points on the curve. Is there a function available in scipy or mlab to fit data into concave and convex functions? If there's any other straightforward way to achieve this, please suggest.

Thanks!
M

From gruben at bigpond.net.au Fri May 15 22:10:46 2009
From: gruben at bigpond.net.au (Gary Ruben)
Date: Sat, 16 May 2009 12:10:46 +1000
Subject: [SciPy-user] Looking for a way to cluster data
In-Reply-To: <91b4b1ab0905121838k28b8dabdn4685f014bd8282c4@mail.gmail.com>
References: <49F3D2A3.3060002@bigpond.net.au> <91b4b1ab0905121838k28b8dabdn4685f014bd8282c4@mail.gmail.com>
Message-ID: <4A0E20A6.6090705@bigpond.net.au>

Hi Damian,

Thanks for taking the time to reply. I ended up with a solution for now that doesn't use scipy.cluster and I won't have the time to revisit this, but I think that with the information you provided, I could probably have used the dendrogram function and not taken a graph-theory approach.

Gary

Damian Eads wrote:
> Hi Gary,
>
> On Sat, Apr 25, 2009 at 8:18 PM, Gary Ruben wrote:
>> Hi all,
>>
>> I'm looking for some advice on how to order data points so that I can
>> visualise them. I've been looking at scipy.cluster for this purpose but
>> I'm not sure whether it is suitable, so I thought I'd see whether anyone
>> had suggestions for a simpler way to order the coordinates.
>
> With the dendrogram function, the order nodes appear from
> left-to-right can be changed with the distance_sort or count_sort
> functions.
>
>> I have a binary 3D array containing 1's that form a shape in a 3D volume
>> against a background of 0's - they form a skeleton of a connected,
>> branched structure. Furthermore, the points are all 26-connected to each
>> other, i.e. there are no gaps in the skeleton. The longest chains may be
>> 1000's of points long.
>> It would be nice to visualise these using the mayavi mlab plot3d
>> function, which draws tubes and which requires ordered coordinates as
>> input, so I need to get ordered coordinate lists that traverse the
>> points along the branches of the skeleton. It would also be nice to
>> preferentially cluster long chains since then I can cull very short
>> chains from the visualisation.
>>
>> scipy.cluster seems to be able to cluster the points but I'm not sure
>> how to get the x,y,z coordinates of the original points out of its
>> linkage data. This may not be possible.
>
> The rows of the linkage matrix are the clusters and the first two
> columns of the linkage matrix are the indices of the left and right
> node, respectively. If the index is less than the number of points
> clustered (i < N), it's a leaf node (original point/singleton
> cluster), otherwise it's a non-singleton cluster (i >= N). Note that
> there are always (N-1) non-singleton clusters, so the linkage matrix
> will always have N-1 rows.
> > >> Maybe the scipy.spatial module >> is a better match to my problem. > > I haven't had the chance to read this part of the discussion but I > hope my answer to your question helps. > > Cheers, > > Damian From roger.herikstad at gmail.com Sat May 16 02:45:30 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Sat, 16 May 2009 14:45:30 +0800 Subject: [SciPy-user] numpy 64 bit build on mac os x Message-ID: Hi all, I'm trying to build numpy from svn (rev 6997) on Mac OS X 10.5.7. I've managed to build a 4-way universal of Python 2.6.2. When I try using that to build numpy, I get a bunch of warnings from ld saying certain files are not of the required architecture. I traced one of these files, _sortmodule.c, and it seems that, when building the extensions, the required arch flags are not transmitted properly, causing the files to built using only the active architecture. Here are the lines in the build log I think are relevant: 925 building 'numpy.core._sort' extension 926 compiling C sources 927 C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/usr/local/include -I/usr/local/include 928 929 compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/src/multiarray -Inumpy/core/s rc/umath -Inumpy/core/include -I/Library/Frameworks/Python64.framework/Versions/2.6/include/python2.6 -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multiarray -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c' 930 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c 931 gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 -isysroot / -L/usr/local/lib -bundle -undefined dynamic_lookup -L/usr/local/lib -I/usr/local/include -I/usr/loc al/include build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o -Lbuild/temp.macosx-10.5-universal-2.6 -o build/ lib.macosx-10.5-universal-2.6/numpy/core/_sort.so 932 ld warning: in build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o, file is not of required architecture 933 ld warning: in build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o, file is not of required architecture 934 ld warning: in build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o, file is not of required architecture The compile options do not contain the arch flags, and I'm not sure how I should go about forcing the build process to use them. Any suggestions anyone? Is there something obvious I'm missing here? Thanks! ~ Roger From david at ar.media.kyoto-u.ac.jp Sat May 16 02:45:50 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 May 2009 15:45:50 +0900 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: References: Message-ID: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Roger Herikstad wrote: > Hi all, > I'm trying to build numpy from svn (rev 6997) on Mac OS X 10.5.7. > I've managed to build a 4-way universal of Python 2.6.2. When I try > using that to build numpy, I get a bunch of warnings from ld saying > certain files are not of the required architecture. I traced one of > these files, _sortmodule.c, and it seems that, when building the > extensions, the required arch flags are not transmitted properly, > causing the files to built using only the active architecture. 
Here > are the lines in the build log I think are relevant: > You could try something like: CFLAGS="-O3 -Wall -DNDEBUG -g -fwrapv -Wstrict-prototypes -arch ppc -arch x86_64 -arch ppc64 -arch i386" python setup.py build David From roger.herikstad at gmail.com Sat May 16 03:23:36 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Sat, 16 May 2009 15:23:36 +0800 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> References: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Thanks for your quick reply. Adding those CFLAGS revealed another problem, though. I now get the following error: building 'numpy.core._sort' extension 1571 compiling C sources 1572 C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -O3 -Wall -DNDEBUG -g -fwrapv -Wstrict-prototypes -arch ppc -arch x86_64 -arch ppc64 -arch i386 -I/usr/local/include 1573 1574 compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/src/multiarray -Inumpy/core/s rc/umath -Inumpy/core/include -I/Library/Frameworks/Python64.framework/Versions/2.6/include/python2.6 -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multia rray -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c' 1575 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c 1576 In file included from numpy/core/include/numpy/ndarrayobject.h:33, 1577 from numpy/core/include/numpy/noprefix.h:7, 1578 from numpy/core/src/_sortmodule.c.src:29: 1579 numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness Is this an issue of the mixed endianness between ppc and intel macs? ~ Roger On Sat, May 16, 2009 at 2:45 PM, David Cournapeau wrote: > Roger Herikstad wrote: >> Hi all, >> ?I'm trying to build numpy from svn (rev 6997) on Mac OS X 10.5.7. >> I've managed to build a 4-way universal of Python 2.6.2. When I try >> using that to build numpy, I get a bunch of warnings from ld saying >> certain files are not of the required architecture. I traced one of >> these files, _sortmodule.c, and it seems that, when building the >> extensions, the required arch flags are not transmitted properly, >> causing the files to built using only the active architecture. Here >> are the lines in the build log I think are relevant: >> > > You could try something like: > > CFLAGS="-O3 -Wall -DNDEBUG -g -fwrapv -Wstrict-prototypes -arch ppc > -arch x86_64 -arch ppc64 -arch i386" python setup.py build > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Sat May 16 03:21:02 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 May 2009 16:21:02 +0900 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: References: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Message-ID: <4A0E695E.7020200@ar.media.kyoto-u.ac.jp> Roger Herikstad wrote: > Hi, > Thanks for your quick reply. Adding those CFLAGS revealed another > problem, though. 
I now get the following error: > > building 'numpy.core._sort' extension > 1571 compiling C sources > 1572 C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes > -O3 -Wall -DNDEBUG -g -fwrapv -Wstrict-prototypes -arch ppc -arch > x86_64 -arch ppc64 -arch i386 -I/usr/local/include > 1573 > 1574 compile options: '-Inumpy/core/include > -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy > -Inumpy/core/src -Inumpy/core/src/multiarray -Inumpy/core/s > rc/umath -Inumpy/core/include > -I/Library/Frameworks/Python64.framework/Versions/2.6/include/python2.6 > -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multia rray > -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c' > 1575 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c > 1576 In file included from numpy/core/include/numpy/ndarrayobject.h:33, > 1577 from numpy/core/include/numpy/noprefix.h:7, > 1578 from numpy/core/src/_sortmodule.c.src:29: > 1579 numpy/core/include/numpy/npy_endian.h:33:10: error: #error > Unknown CPU: can not set endianness > > Is this an issue of the mixed endianness between ppc and intel macs? > Each -arch should imply a new read of the header (AFAIK, -arch is just a convenience to run the target specific compiler corresponding to each architecture), so this is strange. I will look into it, David From cycomanic at gmail.com Sat May 16 21:02:45 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Sun, 17 May 2009 13:02:45 +1200 Subject: [SciPy-user] Chaco question Message-ID: <20090517010244.GA13804@jochen.schroeder.phy.auckland.ac.nz> Hi all, I'm trying to write a small application using chaco to check some of my simulation results. It's actually also a bit of an exercise to learn chaco. Currently what I have is something like this: class ContainerTest(HasTraits): plot = Instance(VPlotContainer) traits_view = View(Item('plot', editor=ComponentEditor(), show_label=False), width=800, height=800, resizable=True, title='spectrum') def __init__(self): super(ContainerTest, self).__init__() I,f,z = load_field('test.h5') plotdata = ArrayPlotData(spectrum=Is) imageplot = Plot(self.plotdata) im = imageplot.img_plot("spectrum", xbounds=t, ybounds=z, colormap=jet)[0] LI = LineInspector(component=self.imageplot, write_metadata=True, inspect_mode='indexed', axis='index_y') imageplot.overlays.append(LI) line = ArrayPlotData(field=I[0],frequency=f) lp = Plot(self.line) lineplot = lp.plot(("frequency","field"), type="line") container = VPlotContainer(imageplot,lp) self.plot = container I is a spectrum of a field depending on frequency f and propagation distance z. So on top i have an imageplot of the spectral evolution, while at the bottom I have just the spectrum at one point of the evolution. What I'm trying to do with the LineInspector is that I'd like to be able to move it up and down and choose which spectrum to display in the bottom plot. I've looked at some of the examples and I understand I need to at an event handler using on_trait_change, however I can't find which object gets written the metadata to. Another (slight) problem I found is that if I change the interpolation to bilinear I have a white gradient overlaying at the top and bottom of the image. Can anyone point me in the right direction? As I'm a bit stuck atm. 
Cheers Jochen From robert.kern at gmail.com Sat May 16 21:11:16 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 16 May 2009 20:11:16 -0500 Subject: [SciPy-user] Chaco question In-Reply-To: <20090517010244.GA13804@jochen.schroeder.phy.auckland.ac.nz> References: <20090517010244.GA13804@jochen.schroeder.phy.auckland.ac.nz> Message-ID: <3d375d730905161811v30fa5791g21836e5212a01872@mail.gmail.com> On Sat, May 16, 2009 at 20:02, Jochen Schroeder wrote: > Hi all, > > I'm trying to write a small application using chaco to check some of my > simulation results. It's actually also a bit of an exercise to learn > chaco. Currently what I have is something like this: Chaco questions should be directed to chaco-users or enthought-dev: https://mail.enthought.com/mailman/listinfo/chaco-users https://mail.enthought.com/mailman/listinfo/enthought-dev -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cycomanic at gmail.com Sat May 16 21:16:46 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Sun, 17 May 2009 13:16:46 +1200 Subject: [SciPy-user] Chaco question In-Reply-To: <3d375d730905161811v30fa5791g21836e5212a01872@mail.gmail.com> References: <20090517010244.GA13804@jochen.schroeder.phy.auckland.ac.nz> <3d375d730905161811v30fa5791g21836e5212a01872@mail.gmail.com> Message-ID: <20090517011642.GB13804@jochen.schroeder.phy.auckland.ac.nz> On 16/05/09 20:11, Robert Kern wrote: > On Sat, May 16, 2009 at 20:02, Jochen Schroeder wrote: > > Hi all, > > > > I'm trying to write a small application using chaco to check some of my > > simulation results. It's actually also a bit of an exercise to learn > > chaco. Currently what I have is something like this: > > Chaco questions should be directed to chaco-users or enthought-dev: > > https://mail.enthought.com/mailman/listinfo/chaco-users > https://mail.enthought.com/mailman/listinfo/enthought-dev > Ah, sorry somehow I missed the chaco mailing list Cheers Jochen > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Sun May 17 03:32:17 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 17 May 2009 03:32:17 -0400 Subject: [SciPy-user] concave and convex function In-Reply-To: <892795.73286.qm@web94915.mail.in2.yahoo.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> Message-ID: <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> On Fri, May 15, 2009 at 9:10 PM, mudit sharma wrote: > > I have following sample (actual dataset is much bigger): > > [ 0.48 ?0.64 ?0.69 ?0.67 ?0.67 ?0.65 ?0.68 ?0.63 ?0.62 ?0.61 ?0.61 ?0.6 > ?0.58 ?0.62 ?0.64 ?0.63 ?0.63 ?0.61 ?0.60 ?0.61 ?0.62 ?0.65 ?0.67 ?0.67 > ?0.67 ?0.68 ?0.68 ?0.66 ?0.68 ?0.65 ?0.65 ?0.65 ?0.65 ?0.64 ?0.64 ?0.65 > ?0.65 ?0.66 ?0.66 ?0.64 ?0.65 ?0.64 ?0.68 ?0.69 ?0.70 ? 0.69 ?0.7 ? 0.68 > ?0.64 ?0.64 ?0.65 ?0.67 ?0.68 ?0.67 ?0.67 ?0.66 ?0.66 ?0.64 ?0.64 ?0.58 > ?0.53 ?0.53 ?0.52] > > > I am looking to scan for all M & W curve formation and cycles. For this I need to detect all concave and convex points on curve. 
Is there function available in scipy or mlab to fit data into concave and convex function? If there's any other straightforward way to achieve this, pleas suggest. > I'm not sure what you need, I don't know what M & W curve formation and cycles are. But for a 1d array, you can just check the second derivative with np.diff, something like x=np.linspace(0.,5.) y = np.sin(x) z=np.diff(np.diff(y))>0 # isconvex z1=np.diff(y,2) # 2nd derivative np.array(z1>0,int) -np.array(z1<0,int) # concave indicator but your example array has lots of ups and downs. If it is a noisy dataset, you might need to smooth it first? Josef From mudit_19a at yahoo.com Sun May 17 09:50:23 2009 From: mudit_19a at yahoo.com (mudit sharma) Date: Sun, 17 May 2009 19:20:23 +0530 (IST) Subject: [SciPy-user] concave and convex function In-Reply-To: <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> Message-ID: <253017.92520.qm@web94903.mail.in2.yahoo.com> Thanks for your response. By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. ----- Original Message ---- From: "josef.pktd at gmail.com" To: SciPy Users List Sent: Sunday, 17 May, 2009 8:32:17 Subject: Re: [SciPy-user] concave and convex function On Fri, May 15, 2009 at 9:10 PM, mudit sharma wrote: > > I have following sample (actual dataset is much bigger): > > [ 0.48 0.64 0.69 0.67 0.67 0.65 0.68 0.63 0.62 0.61 0.61 0.6 > 0.58 0.62 0.64 0.63 0.63 0.61 0.60 0.61 0.62 0.65 0.67 0.67 > 0.67 0.68 0.68 0.66 0.68 0.65 0.65 0.65 0.65 0.64 0.64 0.65 > 0.65 0.66 0.66 0.64 0.65 0.64 0.68 0.69 0.70 0.69 0.7 0.68 > 0.64 0.64 0.65 0.67 0.68 0.67 0.67 0.66 0.66 0.64 0.64 0.58 > 0.53 0.53 0..52] > > > I am looking to scan for all M & W curve formation and cycles. For this I need to detect all concave and convex points on curve. Is there function available in scipy or mlab to fit data into concave and convex function? If there's any other straightforward way to achieve this, pleas suggest. > I'm not sure what you need, I don't know what M & W curve formation and cycles are. But for a 1d array, you can just check the second derivative with np.diff, something like x=np.linspace(0.,5.) y = np.sin(x) z=np.diff(np.diff(y))>0 # isconvex z1=np.diff(y,2) # 2nd derivative np.array(z1>0,int) -np.array(z1<0,int) # concave indicator but your example array has lots of ups and downs. If it is a noisy dataset, you might need to smooth it first? Josef _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From Ross.Williamson at usap.gov Sun May 17 20:52:04 2009 From: Ross.Williamson at usap.gov (Ross Williamson) Date: Mon, 18 May 2009 12:52:04 +1200 Subject: [SciPy-user] addressing 1d array Message-ID: <4A10B134.3060208@usap.gov> Hi everyone I'm currently writing some code where I read in a data file - Generally there are > 1 entry and I check certain parameters for example if data[0]['date'] == something: Which is fine until I get a single element and the above gives: '0-d arrays cannot be indexed' which makes sense , however, is there any easy way of just using the above code without having to check each time to see if it is only a single element - i.e. transparent (similar to idl) and just assumes that data[0] is the only element if there is only one in there. 
Cheers Ross From roger.herikstad at gmail.com Sun May 17 20:56:21 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Mon, 18 May 2009 08:56:21 +0800 Subject: [SciPy-user] addressing 1d array In-Reply-To: <4A10B134.3060208@usap.gov> References: <4A10B134.3060208@usap.gov> Message-ID: Hi, You could take a look at numpy.atleast_1d. ~ Roger ~ Roger On Mon, May 18, 2009 at 8:52 AM, Ross Williamson wrote: > Hi everyone > > I'm currently writing some code where I read in a data file - Generally > there are > 1 entry and I check certain parameters for example > > if data[0]['date'] == something: > > Which is fine until I get a single element and the above gives: > > '0-d arrays cannot be indexed' > > which makes sense , however, is there any easy way of just using the > above code without having to check each time to see if it is only a > single element - i.e. transparent (similar to idl) and just assumes that > data[0] is the only element if there is only one in there. > > Cheers > > Ross > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From Ross.Williamson at usap.gov Sun May 17 21:07:44 2009 From: Ross.Williamson at usap.gov (Ross Williamson) Date: Mon, 18 May 2009 13:07:44 +1200 Subject: [SciPy-user] addressing 1d array In-Reply-To: References: <4A10B134.3060208@usap.gov> Message-ID: <4A10B4E0.8030502@usap.gov> Hi Roger Thanks At the moment I check the length with if data.size == 1: data = data.reshape(1,) Which works so I shouldn't complain too much :) Just thought there might be a more obvious way of doing it Ross Roger Herikstad wrote: > Hi, > You could take a look at numpy.atleast_1d. > > ~ Roger > > ~ Roger > > On Mon, May 18, 2009 at 8:52 AM, Ross Williamson > wrote: > >> Hi everyone >> >> I'm currently writing some code where I read in a data file - Generally >> there are > 1 entry and I check certain parameters for example >> >> if data[0]['date'] == something: >> >> Which is fine until I get a single element and the above gives: >> >> '0-d arrays cannot be indexed' >> >> which makes sense , however, is there any easy way of just using the >> above code without having to check each time to see if it is only a >> single element - i.e. transparent (similar to idl) and just assumes that >> data[0] is the only element if there is only one in there. >> >> Cheers >> >> Ross >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From roger.herikstad at gmail.com Sun May 17 21:06:35 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Mon, 18 May 2009 09:06:35 +0800 Subject: [SciPy-user] addressing 1d array In-Reply-To: <4A10B4E0.8030502@usap.gov> References: <4A10B134.3060208@usap.gov> <4A10B4E0.8030502@usap.gov> Message-ID: Hi Ross, I've come across the same problem myself, and I found numpy.atleast_1d to be handy. At the very least, it saves me one extra line of code : ) ~ Roger On Mon, May 18, 2009 at 9:07 AM, Ross Williamson wrote: > Hi Roger > > Thanks > > At the moment I check the length with > > if data.size == 1: > ? 
?data = data.reshape(1,) > > Which works so I shouldn't complain too much :) Just thought there might > be a more obvious way of doing it > > Ross > > Roger Herikstad wrote: >> Hi, >> ?You could take a look at numpy.atleast_1d. >> >> ~ Roger >> >> ~ Roger >> >> On Mon, May 18, 2009 at 8:52 AM, Ross Williamson >> wrote: >> >>> Hi everyone >>> >>> I'm currently writing some code where I read in a data file - Generally >>> there are > 1 entry and I check certain parameters for example >>> >>> if data[0]['date'] == something: >>> >>> Which is fine until I get a single element and the above gives: >>> >>> '0-d arrays cannot be indexed' >>> >>> which makes sense , however, is there any easy way of just using the >>> above code without having to check each time to see if it is only a >>> single element - i.e. transparent (similar to idl) and just assumes that >>> data[0] is the only element if there is only one in there. >>> >>> Cheers >>> >>> Ross >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sebastian.walter at gmail.com Mon May 18 03:57:42 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Mon, 18 May 2009 09:57:42 +0200 Subject: [SciPy-user] concave and convex function In-Reply-To: <253017.92520.qm@web94903.mail.in2.yahoo.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> <253017.92520.qm@web94903.mail.in2.yahoo.com> Message-ID: On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: > > Thanks for your response. > > By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. Is that supposed to describe what is meant by M & W? No offense, but if you want help, you should state your problem in a way that other ppl understand... > > > > > ----- Original Message ---- > From: "josef.pktd at gmail.com" > To: SciPy Users List > Sent: Sunday, 17 May, 2009 8:32:17 > Subject: Re: [SciPy-user] concave and convex function > > On Fri, May 15, 2009 at 9:10 PM, mudit sharma wrote: >> >> I have following sample (actual dataset is much bigger): >> >> [ 0.48 0.64 0.69 0.67 0.67 0.65 0.68 0.63 0.62 0.61 0.61 0.6 >> 0.58 0.62 0.64 0.63 0.63 0.61 0.60 0.61 0.62 0.65 0.67 0.67 >> 0.67 0.68 0.68 0.66 0.68 0.65 0.65 0.65 0.65 0.64 0.64 0.65 >> 0.65 0.66 0.66 0.64 0.65 0.64 0.68 0.69 0.70 0.69 0.7 0.68 >> 0.64 0.64 0.65 0.67 0.68 0.67 0.67 0.66 0.66 0.64 0.64 0.58 >> 0.53 0.53 0..52] >> >> >> I am looking to scan for all M & W curve formation and cycles. For this I need to detect all concave and convex points on curve. Is there function available in scipy or mlab to fit data into concave and convex function? If there's any other straightforward way to achieve this, pleas suggest. >> > > I'm not sure what you need, I don't know what M & W curve formation > and cycles are. > > But for a 1d array, you can just check the second derivative with > np.diff, something like > > x=np.linspace(0.,5.) 
> y = np.sin(x) > z=np.diff(np.diff(y))>0 # isconvex > z1=np.diff(y,2) # 2nd derivative > np.array(z1>0,int) -np.array(z1<0,int) # concave indicator > > but your example array has lots of ups and downs. If it is a noisy > dataset, you might need to smooth it first? > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Mon May 18 05:57:16 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 May 2009 11:57:16 +0200 Subject: [SciPy-user] Sundials Message-ID: FWIW, a new release of Sundials is available. See below for details. Nils From: Radu Serban Date: Thu, 14 May 2009 15:07:36 -0400 Subject: Sundials 2.4.0 release Announcing the release of Sundials version 2.4.0. The suite includes the following five solvers: - CVODE (v. 2.6.0), for integration of ODE initial value problems; - CVODES (v. 2.6.0), for integration and sensitivity analysis of ODE IVP; - IDA (v. 2.6.0), for integration of DAE initial value problems; - IDAS (v. 1.0.0), for integration and sensitivity analysis of DAE IVP; - KINSOL (v. 2.6.0), for nonlinear algebraic systems. The Sundials solvers provide robust time integrators (with optional sensitivity analysis capabilities) and nonlinear solvers that can easily be incorporated into existing simulation codes. The solvers are independent of the data representation and can be used both on serial and parallel computers. They are written in ANSI C, with CVODE, IDA, and KINSOL also providing a Fortran interface. In addition, sundialsTB provides a Matlab interface to CVODES, IDAS, and KINSOL. Sundials is freely available, under a BSD license, at http://www.llnl.gov/casc/sundials For the Sundials team, Radu Serban From millman at berkeley.edu Mon May 18 10:48:29 2009 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 18 May 2009 07:48:29 -0700 Subject: [SciPy-user] SciPy 2009 Call for Papers Message-ID: ========================== SciPy 2009 Call for Papers ========================== SciPy 2009, the 8th Python in Science conference, will be held from August 18-23, 2009 at Caltech in Pasadena, CA, USA. Each year SciPy attracts leading figures in research and scientific software development with Python from a wide range of scientific and engineering disciplines. The focus of the conference is both on scientific libraries and tools developed with Python and on scientific or engineering achievements using Python. We welcome contributions from the industry as well as the academic world. Indeed, industrial research and development as well academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing your recent breakthroughs using Python! Submission of Papers ==================== The program features tutorials, contributed papers, lightning talks, and bird-of-a-feather sessions. We are soliciting talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics which center around scientific computing using Python. These include applications, teaching, future development directions, and research. A collection of peer-reviewed articles will be published as part of the proceedings. Proposals for talks are submitted as extended abstracts. 
There are two categories of talks: Paper presentations These talks are 35 minutes in duration (including questions). A one page abstract of no less than 500 words (excluding figures and references) should give an outline of the final paper. Proceeding papers are due two weeks after the conference, and may be in a formal academic style, or in a more relaxed magazine-style format. Rapid presentations These talks are 10 minutes in duration. An abstract of between 300 and 700 words should describe the topic and motivate its relevance to scientific computing. In addition, there will be an open session for lightning talks during which any attendee willing to do so is invited to do a couple-of-minutes-long presentation. If you wish to present a talk at the conference, please create an account on the website (http://conference.scipy.org). You may then submit an abstract by logging in, clicking on your profile and following the "Submit an abstract" link. Submission Guidelines --------------------- * Submissions should be uploaded via the online form. * Submissions whose main purpose is to promote a commercial product or service will be refused. * All accepted proposals must be presented at the SciPy conference by at least one author. * Authors of an accepted proposal can provide a final paper for publication in the conference proceedings. Final papers are limited to 7 pages, including diagrams, figures, references, and appendices. The papers will be reviewed to help ensure the high-quality of the proceedings. For further information, please visit the conference homepage: http://conference.scipy.org. Important Dates =============== * Friday, June 26: Abstracts Due * Saturday, July 4: Announce accepted talks, post schedule * Friday, July 10: Early Registration ends * Tuesday-Wednesday, August 18-19: Tutorials * Thursday-Friday, August 20-21: Conference * Saturday-Sunday, August 22-23: Sprints * Friday, September 4: Papers for proceedings due Tutorials ========= Two days of tutorials to the scientific Python tools will precede the conference. There will be two tracks: one for introduction of the basic tools to beginners and one for more advanced tools. Tutorials will be announced later. Birds of a Feather Sessions =========================== If you wish to organize a birds-of-a-feather session to discuss some specific area of scientific development with Python, please contact the organizing committee. Executive Committee =================== * Jarrod Millman, UC Berkeley, USA (Conference Chair) * Ga?l Varoquaux, INRIA Saclay, France (Program Co-Chair) * St?fan van der Walt, University of Stellenbosch, South Africa (Program Co-Chair) * Fernando P?rez, UC Berkeley, USA (Tutorial Chair) From robert.kern at gmail.com Mon May 18 18:20:24 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 May 2009 17:20:24 -0500 Subject: [SciPy-user] concave and convex function In-Reply-To: References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> <253017.92520.qm@web94903.mail.in2.yahoo.com> Message-ID: <3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com> On Mon, May 18, 2009 at 02:57, Sebastian Walter wrote: > On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: >> >> Thanks for your response. >> >> By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. > Is that supposed to describe what is meant by M & W? Peak-trough-peak and trough-peak-trough patterns, respectively, like the shapes of the letters. 
> No offense, but > if you want help, you should > state your problem in a way that other ppl understand... His actual question is reasonably well-worded (he wants to classify the signal into convex and concave portions), but you got distracted by the irrelevant portion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jan.rauberg at gmx.de Tue May 19 02:24:23 2009 From: jan.rauberg at gmx.de (Jan Rauberg) Date: Tue, 19 May 2009 06:24:23 +0000 (UTC) Subject: [SciPy-user] scipy.signal.firwin References: Message-ID: Jan Rauberg <jan.rauberg at gmx.de> writes: > > I'm missing the functionality of firwin like in matlab/octave fir1, so that I > can give a 'low', 'high' and 'stop' option. I don't know how to create a FIR > window based high pass filter. Or is there something planned for the future? > > Thank you > Jan > I've determined that the results of 'firwin' for a low pass filter don't give me the same values as octave or R. So why not simply use the original octave code of Paul Kienzle and build a scipy variant? The people of R have done exactly the same and you get the same results in both environments, so why not in python? Best regards Jan From pav at iki.fi Tue May 19 03:12:45 2009 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 19 May 2009 07:12:45 +0000 (UTC) Subject: [SciPy-user] scipy.signal.firwin References: Message-ID: Tue, 19 May 2009 06:24:23 +0000, Jan Rauberg wrote: [clip] > I've determined that the results of 'firwin' for a low pass filter > don't give me the same values as octave or R. So why not simply use > the original octave code of Paul Kienzle and build a scipy variant? The > people of R have done exactly the same and you get the same results in > both environments, so why not in python? Octave and R are GPL-licensed and Scipy is BSD, so unfortunately we can't share code in that direction. -- Pauli Virtanen From sebastian.walter at gmail.com Tue May 19 03:48:47 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Tue, 19 May 2009 09:48:47 +0200 Subject: [SciPy-user] Sundials In-Reply-To: References: Message-ID: Hello Nils, hello Radu, This package is basically what I could use for my research (optimal experimental design with underlying DAE dynamics). Scipy is really missing integrators that also provide sensitivities: nowadays, simulation is regarded as predictive and therefore it is typical that one wishes to do optimization with ODE, DAE, PDE constraints. In our research, we have an objective function that looks in the simplest case like this: Phi = trace(inv(dot(J.T,J))) where J(q) = dF/dp and F is a function of the solution of a DAE at measurement times [t_1, t_2, ..., t_M], i.e. the optimization problem is min_q Phi(q) I'd like to try SUNDIALS to do some optimal experimental design. But I'm not sure if your package supports all we need: 1) Actually, we need second, third and higher order derivatives, preferably in a combination of adjoint mode and forward mode. Can SUNDIALS do that? 2) The forward mode computes directional derivatives. Is it possible to use user-specified directions? I.e. we want dot(dPhi/d(q_1,q_2), [1,2]), i.e. the derivative of Phi w.r.t. two variables q_1 and q_2 in direction [1,2]. 3) Doing a reverse sweep starting from the objective function, we need to pass in adjoint directions at the measurement times [t_1, t_2, ..., t_M]. Does SUNDIALS support that?
Or is it possible to interrupt the integration at each measurement time, hand in a new adjoint direction and then perform a warm start? Sebastian On Mon, May 18, 2009 at 11:57 AM, Nils Wagner wrote: > FWIW, a new release of Sundials is available. > See below for details. > > Nils > > From: Radu Serban > Date: Thu, 14 May 2009 15:07:36 -0400 > Subject: Sundials 2.4.0 release > > Announcing the release of Sundials version 2.4.0. The > suite includes the > following five solvers: > - CVODE (v. 2.6.0), for integration of ODE initial value > problems; > - CVODES (v. 2.6.0), for integration and sensitivity > analysis of ODE IVP; > - IDA (v. 2.6.0), for integration of DAE initial value > problems; > - IDAS (v. 1.0.0), for integration and sensitivity > analysis of DAE IVP; > - KINSOL (v. 2.6.0), for nonlinear algebraic systems. > > The Sundials solvers provide robust time integrators (with > optional > sensitivity analysis capabilities) and nonlinear solvers > that can easily be > incorporated into existing simulation codes. The solvers > are independent of > the data representation and can be used both on serial and > parallel computers. > They are written in ANSI C, with CVODE, IDA, and KINSOL > also providing a > Fortran interface. In addition, sundialsTB provides a > Matlab interface to > CVODES, IDAS, and KINSOL. > > Sundials is freely available, under a BSD license, at > http://www.llnl.gov/casc/sundials > > For the Sundials team, > Radu Serban > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From yosefmel at post.tau.ac.il Tue May 19 09:48:55 2009 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Tue, 19 May 2009 16:48:55 +0300 Subject: [SciPy-user] Trivial quadratic equation question In-Reply-To: <3ec88f300905130854j7bda66d2ve4facbf100d7aeb5@mail.gmail.com> References: <3ec88f300905120101u74b4561fnfbeddd3e76127919@mail.gmail.com> <1242118354.6312.5.camel@cm-laptop> <3ec88f300905130854j7bda66d2ve4facbf100d7aeb5@mail.gmail.com> Message-ID: <200905191648.55839.yosefmel@post.tau.ac.il> On Wednesday 13 May 2009 18:54:31 Douglas Macdonald wrote: > Thanks Christian. This looks like it will do the job. No point in > reinventing the wheel. Note that the NumPy root-finding function is very general, so it can be much slower than computing the well-known formula for quadratic roots - especially if you also know something about the coefficients that can save time. I once had an algorithm sped up by two orders of magnitude by implementing a tailored formula for a quartic equation with the cubic and quadratic coefficients known to be zero and some other specific knowledge. In case anyone is wondering, this case arises in a lumped-element network of thermal components, with radiative heat transfer contributing the quartic part and conduction/convection the linear part. From scotta_2002 at yahoo.com Tue May 19 11:03:40 2009 From: scotta_2002 at yahoo.com (Scott Askey) Date: Tue, 19 May 2009 08:03:40 -0700 (PDT) Subject: [SciPy-user] Sage 3.4 and scipy speed Message-ID: <925030.31679.qm@web36507.mail.mud.yahoo.com> Is running the scipy 0.6 and python 2.5 under sage in any way inferior (beyond the version number) to the packages that are in my Linux distro? I have to use CentOS 5 at work.
V/R Scott From nwagner at iam.uni-stuttgart.de Tue May 19 12:47:01 2009 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 19 May 2009 18:47:01 +0200 Subject: [SciPy-user] Sundials In-Reply-To: References: Message-ID: On Tue, 19 May 2009 09:48:47 +0200 Sebastian Walter wrote: > Hello Nils, hello Radu, > > This package is basically what I could use for my research (optimal > experimental design with underlying DAE dynamics). > Scipy is really missing integrators that also provide sensitivities: > nowadays, simulation is regarded as predictive and therefore > it is typical that one wishes to do optimization with ODE, DAE, PDE constraints. > > In our research, we have an objective function that looks in the > simplest case like this: > Phi = trace(inv(dot(J.T,J))) > where J(q) = dF/dp and F is a function of the solution of a DAE at > measurement times [t_1, t_2, ..., t_M], > i.e. the optimization problem is > min_q Phi(q) > > I'd like to try SUNDIALS to do some optimal experimental design. But > I'm not sure if your package supports all we need: > 1) Actually, we need second, third and higher order derivatives, > preferably in a combination of adjoint mode and forward mode. > Can SUNDIALS do that? > 2) The forward mode computes directional derivatives. Is it possible to > use user-specified directions? I.e. we want > dot(dPhi/d(q_1,q_2), [1,2]), i.e. the derivative of Phi w.r.t. two > variables q_1 and q_2 in direction [1,2]. > 3) Doing a reverse sweep starting from the objective function, we need > to pass in adjoint directions at the measurement times [t_1, t_2, ..., > t_M]. > Does SUNDIALS support that? Or is it possible to interrupt the > integration at each measurement time, hand in a new adjoint direction > and then perform a warm start? > > Sebastian > Hi Sebastian, I guess Radu is not subscribed to this list. You probably know about http://pysundials.sourceforge.net/ http://sourceforge.net/projects/pysundials I suggest that you ask the maintainer of pysundials. Cheers, Nils From dwf at cs.toronto.edu Tue May 19 19:48:10 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 19 May 2009 19:48:10 -0400 Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> Message-ID: <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> Hi Adam, On 17-Apr-09, at 12:38 PM, Keflavich wrote: > can't get a 64-bit version of python compiled and google has been > unhelpful in resolving the problem. Is there a workaround to get 64 I have had a lot of success with (using the 2.6.2 sources)

mkdir -p build && cd build && ./configure --with-framework-name=Python64 --with-universal-archs=all --enable-framework --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 && make && sudo make install

That builds a 4-way universal binary. --with-universal-archs=64-bit will get you just the 64 bit stuff (note that a few of the make install steps will fail because of Carbon deprecation but nothing important as far as I can see).
David From dwf at cs.toronto.edu Tue May 19 20:18:37 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 19 May 2009 20:18:37 -0400 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: References: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Message-ID: On 16-May-09, at 3:23 AM, Roger Herikstad wrote:

> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multiarray
> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c'
> 1575 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
> 1576 In file included from numpy/core/include/numpy/ndarrayobject.h:33,
> 1577                  from numpy/core/include/numpy/noprefix.h:7,
> 1578                  from numpy/core/src/_sortmodule.c.src:29:
> 1579 numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
>
> Is this an issue of the mixed endianness between ppc and intel macs?

Actually I discovered it's a small bug in npy_endian.h. npy_cpu.h correctly detects and sets CPU architecture macros *but* for some reason npy_endian.h doesn't handle PPC64 and set big-endian. So, npy_cpu.h correctly detects PPC64 as the arch being built for but then the endianness setting code doesn't know what to do with it. The AMD64 build works fine. I've filed a ticket at http://projects.scipy.org/numpy/ticket/1111 and attached a patch there, if you need it. David From roger.herikstad at gmail.com Tue May 19 21:37:17 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 20 May 2009 09:37:17 +0800 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: References: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Just wanted to say thanks for looking into this. I now have a working 4-way universal build of numpy! ~ Roger On Wed, May 20, 2009 at 8:18 AM, David Warde-Farley wrote: > On 16-May-09, at 3:23 AM, Roger Herikstad wrote:

>> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multiarray
>> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c'
>> 1575 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
>> 1576 In file included from numpy/core/include/numpy/ndarrayobject.h:33,
>> 1577                  from numpy/core/include/numpy/noprefix.h:7,
>> 1578                  from numpy/core/src/_sortmodule.c.src:29:
>> 1579 numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
>>
>> Is this an issue of the mixed endianness between ppc and intel macs?
>
> Actually I discovered it's a small bug in npy_endian.h.
>
> npy_cpu.h correctly detects and sets CPU architecture macros *but* for
> some reason npy_endian.h doesn't handle PPC64 and set big-endian. So,
> npy_cpu.h correctly detects PPC64 as the arch being built for but then
> the endianness setting code doesn't know what to do with it. The AMD64
> build works fine.
>
> I've filed a ticket at http://projects.scipy.org/numpy/ticket/1111 and
> attached a patch there, if you need it.
> > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dwf at cs.toronto.edu Tue May 19 22:29:18 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 19 May 2009 22:29:18 -0400 Subject: [SciPy-user] Sage 3.4 and scipy speed In-Reply-To: <925030.31679.qm@web36507.mail.mud.yahoo.com> References: <925030.31679.qm@web36507.mail.mud.yahoo.com> Message-ID: <48AE1F3F-A140-4F1E-9D71-D9F121D2493C@cs.toronto.edu> On 19-May-09, at 11:03 AM, Scott Askey wrote: > Is running the scipy 0.6 and python 2.5 under sage in any way inferior > (beyond the version number) to the packages that are in my Linux > distro? I have to use CentOS 5 at work. You'd be better off asking on a sage list, but I'd imagine not. If you get the right ones for your CPU, Sage binaries might even support some CPU features that CentOS binaries don't. David From roger.herikstad at gmail.com Tue May 19 23:52:39 2009 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 20 May 2009 11:52:39 +0800 Subject: [SciPy-user] numpy 64 bit build on mac os x In-Reply-To: References: <4A0E611E.2060907@ar.media.kyoto-u.ac.jp> Message-ID: Hi, I guess I rejoiced too soon. The build works in 64 bit mode, but if I start the python interpreter in 32 bit mode, I get the following error when I try to import numpy:

Python 2.6.2 (r262:71600, May 15 2009, 09:54:38)
[GCC 4.0.1 (Apple Inc. build 5490)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/__init__.py", line 130, in <module>
    import add_newdocs
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/__init__.py", line 4, in <module>
    from type_check import *
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/type_check.py", line 8, in <module>
    import numpy.core.numeric as _nx
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/__init__.py", line 8, in <module>
    import numerictypes as nt
  File "/Library/Frameworks/Python64.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/numerictypes.py", line 593, in <module>
    _typestr[key] = empty((1,),key).dtype.str[1:]
ValueError: array is too big.

Looking through the source, I find a reference to that error in the file numpy/core/src/multiarray/ctors.c:1396. From what I can understand, this has something to do with the size of a pointer not being what it is expected to be? Any thoughts on this? ~ Roger On Wed, May 20, 2009 at 9:37 AM, Roger Herikstad wrote: > Hi, > Just wanted to say thanks for looking into this. I now have a working > 4-way universal build of numpy! > > ~ Roger > > On Wed, May 20, 2009 at 8:18 AM, David Warde-Farley wrote: >> On 16-May-09, at 3:23 AM, Roger Herikstad wrote:

>>> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/multiarray
>>> -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/src/umath -c'
>>> 1575 gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
>>> 1576 In file included from numpy/core/include/numpy/ndarrayobject.h:33,
>>> 1577                  from numpy/core/include/numpy/noprefix.h:7,
>>> 1578                  from numpy/core/src/_sortmodule.c.src:29:
>>> 1579 numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
>>>
>>> Is this an issue of the mixed endianness between ppc and intel macs?
>>
>> Actually I discovered it's a small bug in npy_endian.h.
>>
>> npy_cpu.h correctly detects and sets CPU architecture macros *but* for
>> some reason npy_endian.h doesn't handle PPC64 and set big-endian. So,
>> npy_cpu.h correctly detects PPC64 as the arch being built for but then
>> the endianness setting code doesn't know what to do with it. The AMD64
>> build works fine.
>>
>> I've filed a ticket at http://projects.scipy.org/numpy/ticket/1111 and
>> attached a patch there, if you need it.
>>
>> David
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
From harald.schilly at gmail.com Wed May 20 03:51:47 2009 From: harald.schilly at gmail.com (Harald Schilly) Date: Wed, 20 May 2009 09:51:47 +0200 Subject: [SciPy-user] Sage 3.4 and scipy speed In-Reply-To: <925030.31679.qm@web36507.mail.mud.yahoo.com> References: <925030.31679.qm@web36507.mail.mud.yahoo.com> Message-ID: <20548feb0905200051t2503ce35t5ca2d8da3efe283e@mail.gmail.com> On Tue, May 19, 2009 at 17:03, Scott Askey wrote: > > Is running the scipy 0.6 and python 2.5 under sage in any way inferior (beyond the version number) to the packages that are in my Linux distro? I have to use CentOS 5 at work. I don't know if there are currently issues with CentOS, but you should compile sage from source. Prerequisites are, I think, "gcc, g++, make, m4, perl, and ranlib", and besides time after "make" you don't need to do anything else. Numeric libraries are then optimized for your CPU, you get an encapsulated package of libraries and they are tested to play well together... h From keflavich at gmail.com Wed May 20 10:36:41 2009 From: keflavich at gmail.com (Gins) Date: Wed, 20 May 2009 07:36:41 -0700 (PDT) Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> Message-ID: Thanks.
I successfully got python 2.6.2 compiled with 64 bit support, but when I try to compile numpy I run into errors that are a little beyond my experience:

gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
In file included from numpy/core/include/numpy/ndarrayobject.h:26,
                 from numpy/core/include/numpy/noprefix.h:7,
                 from numpy/core/src/_sortmodule.c.src:29:
numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out
In file included from numpy/core/include/numpy/ndarrayobject.h:26,
                 from numpy/core/include/numpy/noprefix.h:7,
                 from numpy/core/src/_sortmodule.c.src:29:
numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown CPU: can not set endianness
lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out
error: Command "gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c -o build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o" failed with exit status 1

and I haven't had any luck with the numpy .dmg files for mac. I'll check out sage next and report back. Thanks for the tips! Adam On May 19, 5:48 pm, David Warde-Farley wrote: > Hi Adam, > > On 17-Apr-09, at 12:38 PM, Keflavich wrote: > > > can't get a 64-bit version of python compiled and google has been > > unhelpful in resolving the problem. Is there a workaround to get 64 > > I have had a lot of success with (using the 2.6.2 sources) > > mkdir -p build && cd build && ./configure --with-framework-name=Python64 --with-universal-archs=all --enable-framework --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 && make && sudo make install > > That builds a 4-way universal binary. --with-universal-archs=64-bit will get you just the 64 bit stuff (note that a few of the make install steps will fail because of Carbon deprecation but nothing important as far as I can see). > > David > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From oliphant at enthought.com Wed May 20 10:45:12 2009 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 20 May 2009 09:45:12 -0500 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> Message-ID: <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> Hello all Python users: I am pleased to announce the beginning of a free Webinar series that discusses using Python for scientific computing. Enthought will host this free series which will take place once a month for 30-45 minutes. The schedule and length may change based on participation feedback, but for now it is scheduled for the fourth Friday of every month. This free webinar should not be confused with the EPD webinar on the first Friday of each month which is open only to subscribers to the Enthought Python Distribution.
I (Travis Oliphant) will be the first speaker at this continuing series. I plan to present a brief (10-15 minute) talk on reading binary files with NumPy using memory-mapped arrays and structured data-types. This talk will be followed by a demonstration of Chaco for interactive 2-d visualization and Mayavi for interactive 3-d visualization. Both Chaco and Mayavi are open-source tools and part of the Enthought Tool Suite. They can be conveniently installed using the Enthought Python Distribution. Topics for future webinars will be chosen later based on participant feedback. This event will take place on Friday at 3:00pm CDT and will last 30 to 45 minutes depending on questions asked. Space is limited at this event. If you would like to participate, please register by going to https://www1.gotomeeting.com/register/422340144 or by clicking on the appropriate link in the attached announcement. There will be a 10 minute technical help session prior to the on-line meeting which you should plan to use if you have never participated in a GoToWebinar previously. During this time you can test your connection and audio equipment as well as familiarize yourself with the GoTo Meeting software. I am looking forward to interacting with many of you this Friday. Best regards, Travis Oliphant Enthought, Inc. Enthought is the company that sponsored the creation of SciPy and the Enthought Tool Suite. It continues to sponsor the SciPy community by hosting the SciPy mailing list and website and participating in the development of SciPy and NumPy. Enthought creates custom scientific and technical software applications and provides training on using Python for technical computing. Enthought also provides the Enthought Python Distribution. Learn more at http://www.enthought.com Travis Oliphant's bio can be read at http://www.enthought.com/company/executive-team.php

> Scientific Computing with Python Webinar
>
> Each webinar in this continuing series will demonstrate the use of
> some aspect of Python to assist with scientific, engineering, and
> technical computing. Enthought will host each meeting and select a
> specific topic based on feedback from participants
> Register for a session now by clicking a date below:
> Fri, May 22, 2009 3:00 PM - 3:30 PM CDT
> Fri, Jun 19, 2009 1:00 PM - 1:30 PM CDT
> Fri, Jul 17, 2009 1:00 PM - 1:30 PM CDT
> Once registered you will receive an email confirming your registration
> with information you need to join the Webinar.
> System Requirements
> PC-based attendees
> Required: Windows 2000, XP Home, XP Pro, 2003 Server, Vista
> Macintosh-based attendees
> Required: Mac OS X 10.4 (Tiger) or newer

-------------- next part -------------- An HTML attachment was scrubbed... URL: From kenneth.arnold at gmail.com Wed May 20 10:53:55 2009 From: kenneth.arnold at gmail.com (Kenneth Arnold) Date: Wed, 20 May 2009 10:53:55 -0400 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" In-Reply-To: <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> Message-ID: This is a great idea! Will the presentations be archived? Even an unedited screen capture could be very helpful for people who can't make the time, have technical issues with the meeting software, or need to review the details later.
Thanks, -Ken On Wed, May 20, 2009 at 10:45 AM, Travis Oliphant wrote: > > Hello all Python users: > > I am pleased to announce the beginning of a free Webinar series that > discusses using Python for scientific computing. Enthought will host this > free series which will take place once a month for 30-45 minutes. The > schedule and length may change based on participation feedback, but for now > it is scheduled for the fourth Friday of every month. This free webinar > should not be confused with the EPD webinar on the first Friday of each > month which is open only to subscribers to the Enthought Python > Distribution. > > I (Travis Oliphant) will be the first speaker at this continuing series. I > plan to present a brief (10-15) minute talk on reading binary files with > NumPy using memory mapped arrays and structured data-types. This talk will > be followed by a demonstration of Chaco for interactive 2-d visualization > and Mayavi for interactive 3-d visualization. Both Chaco and Mayavi are > open-source tools and part of the Enthought Tool Suite. They can be > conveniently installed using the Enthought Python Distribution. Topics for > future webinars will be chosen later based on participant feedback. > > This event will take place on Friday at 3:00pm CDT and will last 30 to 45 > minutes depending on questions asked. Space is limited at this event. If > you would like to participate, please register by going to > https://www1.gotomeeting.com/register/422340144 or by clicking on the > appropriate link in the attached announcement. > > There will be a 10 minute technical help session prior to the on-line > meeting which you should plan to use if you have never participated in a > GoToWebinar previously. During this time you can test your connection and > audio equipment as well as familiarize yourself with the GoTo Meeting > software. > > I am looking forward to interacting with many of you this Friday. > > Best regards, > > Travis Oliphant > Enthought, Inc. > > > Enthought is the company that sponsored the creation of SciPy and the > Enthought Tool Suite. It continues to sponsor the SciPy community by > hosting the SciPy mailing list and website and participating in the > development of SciPy and NumPy. Enthought creates custom scientific and > technical software applications and provides training on using Python for > technical computing. Enthought also provides the Enthought Python > Distribution. Learn more at http://www.enthought.com > > Travis Oliphant's bio can be read at > http://www.enthought.com/company/executive-team.php > > > * > * > > > > Scientific Computing with Python Webinar Each > webinar in this continuing series will demonstrate the use of some aspect of > Python to assist with scientific, engineering, and technical computing. > Enthought will host each meeting and select a specific topic based on > feedback from participants *Register for a session now by clicking a > date below:* Fri, May 22, 2009 3:00 PM - 3:30 PM CDT Fri, > Jun 19, 2009 1:00 PM - 1:30 PM CDT Fri, > Jul 17, 2009 1:00 PM - 1:30 PM CDT Once > registered you will receive an email confirming your registration > with information you need to join the Webinar. *System Requirements* > PC-based attendees > Required: Windows? 2000, XP Home, XP Pro, 2003 Server, Vista Macintosh?-based > attendees > Required: Mac OS? X 10.4 (Tiger?) 
or newer > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jim.Vickroy at noaa.gov Wed May 20 11:14:18 2009 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Wed, 20 May 2009 09:14:18 -0600 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" In-Reply-To: References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> Message-ID: <4A141E4A.2000304@noaa.gov> Kenneth Arnold wrote: > This is a great idea! Will the presentations be archived? Even an > unedited screen capture could be very helpful for people who can't > make the time, have technical issues with the meeting software, or > need to review the details later. > > Thanks, > -Ken > Great idea and good question! I hope the presentations will be archived. The first presentation is very interesting to me; unfortunately, I will be traveling then. -- jv > > On Wed, May 20, 2009 at 10:45 AM, Travis Oliphant > > wrote: > > > Hello all Python users: > > I am pleased to announce the beginning of a free Webinar series > that discusses using Python for scientific computing. Enthought > will host this free series which will take place once a month for > 30-45 minutes. The schedule and length may change based on > participation feedback, but for now it is scheduled for the fourth > Friday of every month. This free webinar should not be > confused with the EPD webinar on the first Friday of each month > which is open only to subscribers to the Enthought Python > Distribution. > > I (Travis Oliphant) will be the first speaker at this continuing > series. I plan to present a brief (10-15) minute talk on reading > binary files with NumPy using memory mapped arrays and structured > data-types. This talk will be followed by a demonstration of > Chaco for interactive 2-d visualization and Mayavi for interactive > 3-d visualization. Both Chaco and Mayavi are open-source tools > and part of the Enthought Tool Suite. They can be conveniently > installed using the Enthought Python Distribution. Topics for > future webinars will be chosen later based on participant feedback. > > This event will take place on Friday at 3:00pm CDT and will last > 30 to 45 minutes depending on questions asked. Space is limited > at this event. If you would like to participate, please register > by going to https://www1.gotomeeting.com/register/422340144 or by > clicking on the appropriate link in the attached announcement. > > There will be a 10 minute technical help session prior to the > on-line meeting which you should plan to use if you have never > participated in a GoToWebinar previously. During this time you > can test your connection and audio equipment as well as > familiarize yourself with the GoTo Meeting software. > > I am looking forward to interacting with many of you this Friday. > > Best regards, > > Travis Oliphant > Enthought, Inc. > > > Enthought is the company that sponsored the creation of SciPy and > the Enthought Tool Suite. It continues to sponsor the SciPy > community by hosting the SciPy mailing list and website and > participating in the development of SciPy and NumPy. > Enthought creates custom scientific and technical software > applications and provides training on using Python for technical > computing. 
Enthought also provides the Enthought Python > Distribution. Learn more at http://www.enthought.com > > Travis Oliphant's bio can be read > at http://www.enthought.com/company/executive-team.php > > >> * >> * >> >> >> >> >> >> Scientific Computing with Python Webinar >> >> >> >> >> >> >> >> Each webinar in this continuing series will demonstrate the use >> of some aspect of Python to assist with scientific, engineering, >> and technical computing. Enthought will host each meeting and >> select a specific topic based on feedback from participants >> >> *Register for a session now by clicking a date below:* >> >> Fri, May 22, 2009 3:00 PM - 3:30 PM CDT >> >> >> Fri, Jun 19, 2009 1:00 PM - 1:30 PM CDT >> >> >> Fri, Jul 17, 2009 1:00 PM - 1:30 PM CDT >> >> >> >> Once registered you will receive an email confirming your >> registration >> with information you need to join the Webinar. >> >> *System Requirements* >> PC-based attendees >> Required: Windows? 2000, XP Home, XP Pro, 2003 Server, Vista >> >> Macintosh?-based attendees >> Required: Mac OS? X 10.4 (Tiger?) or newer >> >> >> >> >> >> >> > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perry at stsci.edu Wed May 20 11:16:46 2009 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 20 May 2009 11:16:46 -0400 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" In-Reply-To: <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> Message-ID: <14B8B804-C2F2-4B8D-B713-2C30465CD409@stsci.edu> Hi Travis, Does registration imply that there is a limit to how many can participate? Is there a certain preference as to who has priority (e.g., newbies over old hands, etc.)? Thanks, Perry On May 20, 2009, at 10:45 AM, Travis Oliphant wrote: > > Hello all Python users: > > I am pleased to announce the beginning of a free Webinar series that > discusses using Python for scientific computing. Enthought will > host this free series which will take place once a month for 30-45 > minutes. The schedule and length may change based on participation > feedback, but for now it is scheduled for the fourth Friday of every > month. This free webinar should not be confused with the EPD > webinar on the first Friday of each month which is open only to > subscribers to the Enthought Python Distribution. > > I (Travis Oliphant) will be the first speaker at this continuing > series. I plan to present a brief (10-15) minute talk on reading > binary files with NumPy using memory mapped arrays and structured > data-types. This talk will be followed by a demonstration of Chaco > for interactive 2-d visualization and Mayavi for interactive 3-d > visualization. Both Chaco and Mayavi are open-source tools and > part of the Enthought Tool Suite. They can be conveniently > installed using the Enthought Python Distribution. Topics for > future webinars will be chosen later based on participant feedback. 
> > This event will take place on Friday at 3:00pm CDT and will last 30 > to 45 minutes depending on questions asked. Space is limited at > this event. If you would like to participate, please register by > going to https://www1.gotomeeting.com/register/422340144 or by > clicking on the appropriate link in the attached announcement. > > There will be a 10 minute technical help session prior to the on- > line meeting which you should plan to use if you have never > participated in a GoToWebinar previously. During this time you can > test your connection and audio equipment as well as familiarize > yourself with the GoTo Meeting software. > > I am looking forward to interacting with many of you this Friday. > > Best regards, > > Travis Oliphant > Enthought, Inc. > > > Enthought is the company that sponsored the creation of SciPy and > the Enthought Tool Suite. It continues to sponsor the SciPy > community by hosting the SciPy mailing list and website and > participating in the development of SciPy and NumPy. Enthought > creates custom scientific and technical software applications and > provides training on using Python for technical computing. > Enthought also provides the Enthought Python Distribution. Learn > more at http://www.enthought.com > > Travis Oliphant's bio can be read at http://www.enthought.com/company/executive-team.php > > >> >> >> >> >> >> Scientific Computing with Python Webinar >> >> >> >> >> >> >> >> Each webinar in this continuing series will demonstrate the use of >> some aspect of Python to assist with scientific, engineering, and >> technical computing. Enthought will host each meeting and select >> a specific topic based on feedback from participants >> Register for a session now by clicking a date below: >> Fri, May 22, 2009 3:00 PM - 3:30 PM CDT >> Fri, Jun 19, 2009 1:00 PM - 1:30 PM CDT >> Fri, Jul 17, 2009 1:00 PM - 1:30 PM CDT >> Once registered you will receive an email confirming your >> registration >> with information you need to join the Webinar. >> System Requirements >> PC-based attendees >> Required: Windows? 2000, XP Home, XP Pro, 2003 Server, Vista >> Macintosh?-based attendees >> Required: Mac OS? X 10.4 (Tiger?) or newer >> >> > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josephsmidt at gmail.com Wed May 20 15:44:55 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Wed, 20 May 2009 12:44:55 -0700 Subject: [SciPy-user] Easy way to make a block diagonal matrix? Message-ID: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> Hi, Is there an easy way to create a block diagonal matrix from existing matrices? For example, lets assume I have three 2x2 matrices a, b and c. Is there something like d = block_diag(a,b,c) which would create a 6x6 block diagonal matrix from a, b and c? If not, is there a straight forward way to accomplish the same thing? Joseph Smidt -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From jrennie at gmail.com Wed May 20 15:51:03 2009 From: jrennie at gmail.com (Jason Rennie) Date: Wed, 20 May 2009 15:51:03 -0400 Subject: [SciPy-user] scipy.optimize.fmin_cg Message-ID: <75c31b2a0905201251q770cda2ek61c9ee1f137da328@mail.gmail.com> Hello, I'm planning to use this function to optimize a least squares objective. 
I noticed that the "norm" argument defaults to "inf" or max norm. Does this mean that (by default) the search is done in max-norm space rather than L2/Euclidean norm space? Should I be worried about this setting? Thanks, Jason -- Jason Rennie Research Scientist, ITA Software http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Wed May 20 16:48:47 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 20 May 2009 16:48:47 -0400 Subject: [SciPy-user] scipy.optimize.fmin_cg In-Reply-To: <75c31b2a0905201251q770cda2ek61c9ee1f137da328@mail.gmail.com> References: <75c31b2a0905201251q770cda2ek61c9ee1f137da328@mail.gmail.com> Message-ID: <5CFA6355-68FF-4561-B19C-311B3B7CA023@cs.toronto.edu> On 20-May-09, at 3:51 PM, Jason Rennie wrote: > Hello, > > I'm planning to use this function to optimize a least squares > objective. I > noticed that the "norm" argument defaults to "inf" or max norm. > Does this > mean that (by default) the search is done in max-norm space rather > than > L2/Euclidean norm space? Should I be worried about this setting? No; the termination criterion is based on the norm of the gradient. By default, it uses the infinity norm. This simply means that by default, the search terminates when _every_ element of the returned gradient is less than gtol. This is a bit easier to think about than figuring out a tolerance on the 2-norm of the gradient vector, especially in very high dimensional spaces. David From jrennie at gmail.com Wed May 20 16:51:26 2009 From: jrennie at gmail.com (Jason Rennie) Date: Wed, 20 May 2009 16:51:26 -0400 Subject: [SciPy-user] scipy.optimize.fmin_cg In-Reply-To: <5CFA6355-68FF-4561-B19C-311B3B7CA023@cs.toronto.edu> References: <75c31b2a0905201251q770cda2ek61c9ee1f137da328@mail.gmail.com> <5CFA6355-68FF-4561-B19C-311B3B7CA023@cs.toronto.edu> Message-ID: <75c31b2a0905201351h1e3a5dc5k96e596ace7c58dce@mail.gmail.com> Makes sense. Thanks! Jason On Wed, May 20, 2009 at 4:48 PM, David Warde-Farley wrote: > On 20-May-09, at 3:51 PM, Jason Rennie wrote: > > > Hello, > > > > I'm planning to use this function to optimize a least squares > > objective. I > > noticed that the "norm" argument defaults to "inf" or max norm. > > Does this > > mean that (by default) the search is done in max-norm space rather > > than > > L2/Euclidean norm space? Should I be worried about this setting? > > No; the termination criterion is based on the norm of the gradient. By > default, it uses the infinity norm. > > This simply means that by default, the search terminates when _every_ > element of the returned gradient is less than gtol. This is a bit > easier to think about than figuring out a tolerance on the 2-norm of > the gradient vector, especially in very high dimensional spaces. > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jason Rennie Research Scientist, ITA Software http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed May 20 17:31:56 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 20 May 2009 23:31:56 +0200 Subject: [SciPy-user] Easy way to make a block diagonal matrix? 
In-Reply-To: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> Message-ID: <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> Hi Joseph 2009/5/20 Joseph Smidt : > ? ? Is there an easy way to create a block diagonal matrix from > existing matrices? ?For example, lets assume I have three 2x2 matrices > a, b and c. ?Is there something like d = block_diag(a,b,c) which would > create a 6x6 block diagonal matrix from a, b and c? ?If not, is there > a straight forward way to accomplish the same thing? The attached function should do the trick. Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: block.py Type: application/octet-stream Size: 961 bytes Desc: not available URL: From josephsmidt at gmail.com Wed May 20 17:37:55 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Wed, 20 May 2009 14:37:55 -0700 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> Message-ID: <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> Thank you, this looks exactly what I need. Joseph Smidt 2009/5/20 St?fan van der Walt : > Hi Joseph > > 2009/5/20 Joseph Smidt : >> ? ? Is there an easy way to create a block diagonal matrix from >> existing matrices? ?For example, lets assume I have three 2x2 matrices >> a, b and c. ?Is there something like d = block_diag(a,b,c) which would >> create a 6x6 block diagonal matrix from a, b and c? ?If not, is there >> a straight forward way to accomplish the same thing? > > The attached function should do the trick. > > Regards > St?fan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From josephsmidt at gmail.com Wed May 20 17:40:06 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Wed, 20 May 2009 14:40:06 -0700 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> Message-ID: <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> Actually, I don't know if you could submit this routine for inclusion into scipy itself. I'm sure there are lots of people who need to create block diagonal arrays like this. Plus, your script looks really well written. Joseph Smidt On Wed, May 20, 2009 at 2:37 PM, Joseph Smidt wrote: > Thank you, this looks exactly what I need. > > ? ? ? ? ? ? ? ? ? ? ?Joseph Smidt > > 2009/5/20 St?fan van der Walt : >> Hi Joseph >> >> 2009/5/20 Joseph Smidt : >>> ? ? Is there an easy way to create a block diagonal matrix from >>> existing matrices? ?For example, lets assume I have three 2x2 matrices >>> a, b and c. ?Is there something like d = block_diag(a,b,c) which would >>> create a 6x6 block diagonal matrix from a, b and c? ?If not, is there >>> a straight forward way to accomplish the same thing? 
>> >> The attached function should do the trick. >> >> Regards >> St?fan >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > > -- > ------------------------------------------------------------------------ > Joseph Smidt > > Physics and Astronomy > 4129 Frederick Reines Hall > Irvine, CA 92697-4575 > Office: 949-824-3269 > -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From joshua.stults at gmail.com Wed May 20 17:58:52 2009 From: joshua.stults at gmail.com (Joshua Stults) Date: Wed, 20 May 2009 17:58:52 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> Message-ID: Probably numpy.kron() already provides this functionality plus easily generating more general block matrices: http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html On Wed, May 20, 2009 at 5:40 PM, Joseph Smidt wrote: > Actually, ?I don't know if you could submit this routine for inclusion > into scipy itself. ?I'm sure there are lots of people who need to > create block diagonal arrays like this. ?Plus, your script looks > really well written. > > ? ? ? ? ? ? ? ? ? ? ? ? ? Joseph Smidt > > On Wed, May 20, 2009 at 2:37 PM, Joseph Smidt wrote: >> Thank you, this looks exactly what I need. >> >> ? ? ? ? ? ? ? ? ? ? ?Joseph Smidt >> >> 2009/5/20 St?fan van der Walt : >>> Hi Joseph >>> >>> 2009/5/20 Joseph Smidt : >>>> ? ? Is there an easy way to create a block diagonal matrix from >>>> existing matrices? ?For example, lets assume I have three 2x2 matrices >>>> a, b and c. ?Is there something like d = block_diag(a,b,c) which would >>>> create a 6x6 block diagonal matrix from a, b and c? ?If not, is there >>>> a straight forward way to accomplish the same thing? >>> >>> The attached function should do the trick. >>> >>> Regards >>> St?fan >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> >> >> -- >> ------------------------------------------------------------------------ >> Joseph Smidt >> >> Physics and Astronomy >> 4129 Frederick Reines Hall >> Irvine, CA 92697-4575 >> Office: 949-824-3269 >> > > > > -- > ------------------------------------------------------------------------ > Joseph Smidt > > Physics and Astronomy > 4129 Frederick Reines Hall > Irvine, CA 92697-4575 > Office: 949-824-3269 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Joshua Stults Website: http://j-stults.blogspot.com From stefan at sun.ac.za Wed May 20 18:11:17 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 May 2009 00:11:17 +0200 Subject: [SciPy-user] Easy way to make a block diagonal matrix? 
In-Reply-To: <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> Message-ID: <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> 2009/5/20 Joseph Smidt : > Actually, I don't know if you could submit this routine for inclusion > into scipy itself. I'm sure there are lots of people who need to > create block diagonal arrays like this. Plus, your script looks > really well written. I'd be glad if others find it useful. I'm not quite sure where in SciPy it would go, though? I see the scipy.sparse module has similar functionality, although it is a bit more painful to use in this situation:

In [2]: import scipy.sparse as ss

In [3]: A = np.array([[1, 2], [3, 4]])

In [4]: B = np.array([[1, 2, 3, 4]])

In [5]: C = np.array([[4]])

In [10]: ss.bmat([[A, None, None], [None, B, None], [None, None, C]]).todense()
Out[10]:
matrix([[1, 2, 0, 0, 0, 0, 0],
        [3, 4, 0, 0, 0, 0, 0],
        [0, 0, 1, 2, 3, 4, 0],
        [0, 0, 0, 0, 0, 0, 4]])

More generally:

import scipy.sparse as ss
import numpy as np

def block_diag(*arrs):
    arrs = [np.asarray(a) for a in arrs]
    D = len(arrs)
    Dr = np.arange(D)
    diag_arr = np.empty((D, D), dtype=object)
    diag_arr[Dr, Dr] = arrs
    return ss.bmat(diag_arr).todense()

Cheers Stéfan From stefan at sun.ac.za Wed May 20 18:16:09 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 May 2009 00:16:09 +0200 Subject: [SciPy-user] Easy way to make a block diagonal matrix?
> > Regards > St?fan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.stults at gmail.com Wed May 20 18:47:07 2009 From: joshua.stults at gmail.com (Joshua Stults) Date: Wed, 20 May 2009 18:47:07 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com> Message-ID: Well that's a good point; I didn't even notice he wanted three different matrices, I just latched on to 'block diagonal'; I guess I've only ever used kron type functions for doing block matrices with the same matrix at each block entry: http://j-stults.blogspot.com/2009/01/kronecker-product-of-sparse-matrices.html Like you'd get with a multi-dimensional finite difference discretization. Just curious, what sort of application would give you different matrix blocks on the diagonal? 2009/5/20 St?fan van der Walt : > 2009/5/20 Joshua Stults : >> Probably numpy.kron() already provides this functionality plus easily >> generating more general block matrices: >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html > > numpy.kron takes two arrays as input, so I'm not sure how that could > work, especially for diagonal blocks with varying shapes? ?Would you > use object arrays? Could be that my brain has gone to bed already! > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Joshua Stults Website: http://j-stults.blogspot.com From ivo.maljevic at gmail.com Wed May 20 18:48:08 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 20 May 2009 18:48:08 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com> Message-ID: <826c64da0905201548v731caf0eud34baeea2960cf6a@mail.gmail.com> I am pretty sure kron() function doesn't work. I've used it both in matlab and with scipy to insert zeros or repeat vectors. Your function is very well written. 2009/5/20 St?fan van der Walt > 2009/5/20 Joshua Stults : > > Probably numpy.kron() already provides this functionality plus easily > > generating more general block matrices: > > > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html > > numpy.kron takes two arrays as input, so I'm not sure how that could > work, especially for diagonal blocks with varying shapes? Would you > use object arrays? Could be that my brain has gone to bed already! 
> > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed May 20 18:59:02 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 20 May 2009 18:59:02 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> Message-ID: <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> 2009/5/20 St?fan van der Walt : > 2009/5/20 Joseph Smidt : >> Actually, ?I don't know if you could submit this routine for inclusion >> into scipy itself. ?I'm sure there are lots of people who need to >> create block diagonal arrays like this. ?Plus, your script looks >> really well written. > > I'd be glad if others find it useful. ?I'm not quite sure where in > SciPy it would go, though? > scipy.linalg has some matrix creation functions, some look like duplicate functionality compared with numpy to me, this would be a possible location (partially hidden in docs) + kron + hankel + toeplitz + tri + tril + triu I think adding it to numpy instead, alongside kron and diag, might be more appropriate. Josef From josephsmidt at gmail.com Wed May 20 19:03:03 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Wed, 20 May 2009 16:03:03 -0700 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com> Message-ID: <142682e10905201603y599dd50k585626b85e4866d7@mail.gmail.com> On Wed, May 20, 2009 at 3:47 PM, Joshua Stults wrote: > discretization. ?Just curious, what sort of application would give you > different matrix blocks on the diagonal? There are several applications. For me I need it for group theory. In group theory, in physics at least, you can represent groups by matrices. If you have an NxN matrix representation of your group you say the dimension of your representation is N. Sometimes, you have to combine representations. To do this you have to combine the two representations by creating a new block diagonal matrix of the two representation. For example, in standard quantum mechanics two spin 1/2 particles are in a 3+1 state, so you represent this with a block diagonal matrix from two matrices A which is 3x3 and B which is 1x1. For more complex situations you might need to create a say 6+3 representation. In QCD there is a 10+8+8+1 representation. Anyways, there is an application. Plus, you need them if you have systems of equations where you don't want them to couple. 
From mikedewar at gmail.com  Wed May 20 19:06:18 2009
From: mikedewar at gmail.com (mike dewar)
Date: Thu, 21 May 2009 00:06:18 +0100
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To:
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201516l3f8ea74dxa958d12cdad70fba@mail.gmail.com>
Message-ID: <6FD773E6-237D-4784-9BD8-F582C734129D@gmail.com>

A state space model consisting of three independent dynamic structures
that share a common input would use a block diagonal structure in the
state matrix. So if

    x_t = Ax_{t-1} + Bu_{t-1}

where x_t is the state at time t, u_t is the input at time t, and where B
was a full matrix but A was block diagonal, s.t.

    A = [A_1   0    0
          0   A_2   0
          0    0   A_3]

could describe three independent dynamic systems acted on by the same
input.

We use block matrices a lot when the state of a system consists of
something like a Takens embedding, where the model state consists of the
current and past system states, so as to make everything Markovian and
therefore easier to deal with. The state matrix is then naturally 'blocky'
and sometimes it's quicker to refer to blocks rather than constantly
messing about with slicing.

Cheers,

Mike Dewar

On 20 May 2009, at 23:47, Joshua Stults wrote:

> Like you'd get with a multi-dimensional finite difference
> discretization. Just curious, what sort of application would give you
> different matrix blocks on the diagonal?
>
> -- 
> Joshua Stults
> Website: http://j-stults.blogspot.com
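A minimal numerical sketch of that block-diagonal state update (the
matrices below are invented for illustration, not taken from any model in
this thread):

import numpy as np

# Three independent subsystems placed on the diagonal of the state matrix.
A1 = np.array([[0.9]])
A2 = np.array([[0.5, 0.1],
               [0.0, 0.5]])
A3 = np.array([[0.7]])

A = np.zeros((4, 4))
A[0:1, 0:1] = A1
A[1:3, 1:3] = A2
A[3:4, 3:4] = A3

B = np.ones((4, 1))       # full input matrix shared by all subsystems
x = np.zeros((4, 1))      # state x_{t-1}
u = np.array([[1.0]])     # input u_{t-1}

x_next = np.dot(A, x) + np.dot(B, u)    # x_t = A x_{t-1} + B u_{t-1}

Because A is block diagonal the three subsystems do not mix; they only
respond to the same input.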
From ferrell at diablotech.com  Wed May 20 19:12:50 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Wed, 20 May 2009 17:12:50 -0600
Subject: [SciPy-user] Subclass of TimeSeries
Message-ID: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com>

How do I derive a subclass from TimeSeries? I tried

import scikits.timeseries as ts
import numpy as np

class MyTS(ts.TimeSeries):
    def __init__(self, label, dataArray, dateArray):
        ts.TimeSeries.__init__(self, data=dataArray, dates=dateArray)
        self.label = label
        return

A = np.random.random(10)
dts = ts.date_array(start_date=ts.Date('d', '2009-1-1'), length=10, freq='d')
myts = MyTS(label='notWork', dataArray=A, dateArray=dts)

But that doesn't work. In this case, I got:

/Users/Shared/Develop/Sandbox/tsSubClass.py in ()
     13 A = np.random.random(10)
     14 dts = ts.date_array(start_date = ts.Date('d', '2009-1-1'), length=10, freq='d')
---> 15 myts = MyTS(label='notWork', dataArray=A, dateArray=dts)
     16 print myts.series
     17

: __new__() takes at least 3 non-keyword arguments (1 given)

Am I making some silly mistake? Or is this a bit more complicated than I
realize?

thanks,
-robert

From stefan at sun.ac.za  Wed May 20 19:32:22 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 21 May 2009 01:32:22 +0200
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com>
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com>
Message-ID: <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com>

2009/5/21 :
> scipy.linalg has some matrix creation functions, some look like

Thanks, that looks like a good spot.

Please review the attached patch (if anybody does not want it to go in,
now is a good time to voice your concerns).

Cheers
Stéfan

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-Add-block-diagonal-matrix-constructor.patch
Type: application/octet-stream
Size: 3282 bytes
Desc: not available

From pgmdevlist at gmail.com  Wed May 20 19:50:58 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 20 May 2009 19:50:58 -0400
Subject: [SciPy-user] Subclass of TimeSeries
In-Reply-To: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com>
References: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com>
Message-ID: <10A4C5BE-B8AD-4160-A00C-705D150869D8@gmail.com>

On May 20, 2009, at 7:12 PM, Robert Ferrell wrote:
> How do I derive a subclass from TimeSeries? I tried
>
> import scikits.timeseries as ts
> import numpy as np
>
> class MyTS(ts.TimeSeries):
>     def __init__(self, label, dataArray, dateArray):
>         ts.TimeSeries.__init__(self, data=dataArray, dates=dateArray)
>         self.label=label
>         return

Just as using __init__ is not the way to subclass ndarray (except when
subclassing also from an object that requires __init__), it is not the way
to go here. You need to implement a __new__ and a __array_finalize__ as
described here:
http://docs.scipy.org/doc/numpy/user/basics.subclassing.html

The easiest is to follow some of the examples given in the
scikits.hydroclimpy package (http://hydroclimpy.sourceforge.net/). For
instance, here's a class that attaches a reference period to the series
(in scikits/hydroclimpy/core/base.py). Adapting this example to your case
should be straightforward. Nevertheless, don't hesitate to ask for more
details/info as needed.

Cheers
P.
###
class ReferencedSeries(TimeSeries, object):

    def __new__(cls, data, dates=None, mask=nomask, refperiod=None,
                freq=None, start_date=None, autosort=True,
                dtype=None, copy=False, **options):
        (maoptions, options) = get_maskoptions(**options)
        maoptions.update(dict(copy=copy, dtype=dtype))
        _data = ts.time_series(data, dates=dates, mask=mask, freq=freq,
                               start_date=start_date, autosort=autosort,
                               **maoptions).view(cls)
        # Set the reference period
        if refperiod is None or tuple(refperiod) == (None, None):
            # Don't call refperiod yet, in case we come from a pickle
            _data._optinfo['reference_period'] = refperiod
        else:
            # OK, here we can call refperiod
            _data.refperiod = refperiod
        return _data
###

From josef.pktd at gmail.com  Wed May 20 20:23:08 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 20 May 2009 20:23:08 -0400
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com>
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com>
Message-ID: <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com>

2009/5/20 Stéfan van der Walt :
> 2009/5/21 ?:
>> scipy.linalg has some matrix creation functions, some look like
>
> Thanks, that looks like a good spot.
>
> Please review the attached patch (if anybody does not want it to go
> in, now is a good time to voice your concerns).
>

It might be better to preserve the dtype of the input arrays, e.g. I
could think of a use for integer variables, e.g. dummy variables in
regression or anova, or to allow an option for the dtype when you
create the zeros array.

I don't know if anybody would want complex or character matrices.

I just checked, np.kron and np.diag preserves integer type, and
np.kron converts to float for mixed types, diag preserves character
type.

otherwise it looks good and useful to me.

Josef

From bsouthey at gmail.com  Wed May 20 21:32:34 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 20 May 2009 20:32:34 -0500
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com>
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com>
Message-ID:

On Wed, May 20, 2009 at 7:23 PM, wrote:
> 2009/5/20 Stéfan van der Walt :
>> 2009/5/21 ?:
>>> scipy.linalg has some matrix creation functions, some look like
>>
>> Thanks, that looks like a good spot.
>>
>> Please review the attached patch (if anybody does not want it to go
>> in, now is a good time to voice your concerns).
>>
>
> It might be better to preserve the dtype of the input arrays, e.g. I
> could think of a use for integer variables, e.g.
dummy variables in > regression or anova, or to allow an option for the dtype when you > create the zeros array. > > I don't know if anybody would want complex or character matrices. > > I just checked, np.kron and np.diag preserves integer type, and > np.kron converts to float for mixed types, diag preserves character > type. > > otherwise it looks good and useful to me. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, What is the definition that you are using for a block diagonal matrix? Some definitions use square matrices: http://en.wikipedia.org/wiki/Block_matrix#Block_diagonal_matrices http://mathworld.wolfram.com/BlockDiagonalMatrix.html But Matlab's blkdiag function does not and, thus, it may not result in a diagonal matrix: http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/ref/blkdiag.html So the documentation should reflect the selected definition. Also, I support Josef's suggestion that this function would be better suited in numpy rather than scipy. Bruce From erik.tollerud at gmail.com Wed May 20 21:54:09 2009 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Wed, 20 May 2009 18:54:09 -0700 Subject: [SciPy-user] scipy.interpolate spline class names Message-ID: I use the splines in scipy.interpolate quite a bit, and I particularly like the *UnivariateSpline and *BivariateSpline wrapper classes. However, I cannot for the life of me work out what gives with the names and documentation... As far as I can tell, the univariate splines are as follows: UnivariateSpline : A spline where the number of knots is chosen using the "smoothing factor" s LSQUnivariateSpline: A spline where the knots are explicitly specified InterpolatedUnivariateSpline: A spline with s=0 or t=[] (e.g. passes through all the fitting points) The documentation just says the second two "just have less error checking"... aren't they for very different purposes? And while I recognize that name changes at this stage might be uncalled for, the names are somewhat misleading, too... shouldn't they be "SmoothUnivariateSpline","KnotUnivariateSpline", and "InterpolatedUnivariateSpline" or something like that? It also seems there are similar versions for the *BivariateSpline classes, although it's unclear to me exactly what the raw BivariateSpline class does as compared to the SmoothBivariateSpline (and the RectBivariateSpline, at least, makes sense) From josephsmidt at gmail.com Wed May 20 21:54:55 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Wed, 20 May 2009 18:54:55 -0700 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> Message-ID: <142682e10905201854s21dbfa47p24128e012a6963c5@mail.gmail.com> On Wed, May 20, 2009 at 6:32 PM, Bruce Southey wrote: > > Hi, > What is the definition that you are using for a block diagonal matrix? > ... 
> But Matlab's blkdiag function does not and, thus, it may not result in > a diagonal matrix: > http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk > /help/techdoc/ref/blkdiag.html Typically I would define a block diagonal matrix as a square matrix as the Wikipedia does. However, it might be nice to side with Matlab since there may be circumstances where one needs a more general block diagonal matrix than a square one. Personally, if it is just as efficient algorithmically, I would side with the more general definition as Matlab does. Thanks again for all of this. Joseph Smidt -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From josef.pktd at gmail.com Wed May 20 22:09:11 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 20 May 2009 22:09:11 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> Message-ID: <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> On Wed, May 20, 2009 at 9:32 PM, Bruce Southey wrote: > On Wed, May 20, 2009 at 7:23 PM, ? wrote: >> 2009/5/20 St?fan van der Walt : >>> 2009/5/21 ?: >>>> scipy.linalg has some matrix creation functions, some look like >>> >>> Thanks, that looks like a good spot. >>> >>> Please review the attached patch (if anybody does not want it to go >>> in, now is a good time to voice your concerns). >>> >> >> It might be better to preserve the dtype of the input arrays, e.g. I >> could think of a use for integer variables, e.g. dummy variables in >> regression or anova, or to allow an option for the dtype when you >> create the zeros array. >> >> I don't know if anybody would want complex or character matrices. >> >> I just checked, np.kron and np.diag preserves integer type, and >> np.kron converts to float for mixed types, diag preserves character >> type. >> >> otherwise it looks good and useful to me. >> >> Josef >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > Hi, > What is the definition that you are using for a block diagonal matrix? > > Some definitions use square matrices: > http://en.wikipedia.org/wiki/Block_matrix#Block_diagonal_matrices > http://mathworld.wolfram.com/BlockDiagonalMatrix.html > > But Matlab's blkdiag function does not and, thus, it may not result in > a diagonal matrix: > http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/ref/blkdiag.html Since it's just a useful function and not a mathematical concept, I think the meaning is clear from the construction and example, although the matlab explanation is more informative. If all individual component matrices are square, then you get the wikipedia definition. 
But for regression with panel data or with dummy variables, the analogy to
the kronecker product is better: component matrices have many rows
(observations) and only a few columns (regressors). I would have to look
it up again, but I think the design matrix for a seemingly unrelated
regression (SUR) would be just block_diag(x1,x2,...xn) and the endogenous
variable is vstack(y1,y2,...yn); I'm not sure what matrix operation
(kronecker product) would yield the covariance matrix in one step.
(there is only a stub at
http://en.wikipedia.org/wiki/Seemingly_unrelated_regression and it doesn't
look completely correct to me)

The wikipedia page has more types of block matrices, and maybe some of
them also have general use cases.

I haven't gotten around yet to program anything for panel data or SUR, so
I don't know what else might be needed.

Josef

> So the documentation should reflect the selected definition.
>
> Also, I support Josef's suggestion that this function would be
> better suited in numpy rather than scipy.
>
> Bruce

From roger.herikstad at gmail.com  Wed May 20 22:17:31 2009
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Thu, 21 May 2009 10:17:31 +0800
Subject: [SciPy-user] 64 bit on Mac?
In-Reply-To:
References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu>
Message-ID:

Hi,
 Sounds like the exact same problem I was having. There's a ticket for it
here http://projects.scipy.org/numpy/ticket/1111, with a patch that fixed
the problem for me, at least. Good luck!

~ Roger

On Wed, May 20, 2009 at 10:36 PM, Gins wrote:
> Thanks. I successfully got python 2.6.2 compiled with 64 bit support,
> but when I try to compile numpy I run into errors that are a little
> beyond my experience:
>
> gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
> In file included from numpy/core/include/numpy/ndarrayobject.h:26,
>                  from numpy/core/include/numpy/noprefix.h:7,
>                  from numpy/core/src/_sortmodule.c.src:29:
> numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown
> CPU: can not set endianness
> lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out
> In file included from numpy/core/include/numpy/ndarrayobject.h:26,
>                  from numpy/core/include/numpy/noprefix.h:7,
>                  from numpy/core/src/_sortmodule.c.src:29:
> numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown
> CPU: can not set endianness
> lipo: can't figure out the architecture type of: /var/folders/ni/ni+DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out
> error: Command "gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 -isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Inumpy/core/include -Ibuild/src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c -o build/temp.macosx-10.5-universal-2.6/build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o" failed with exit status 1
>
> and I haven't had any luck with the numpy .dmg files for mac.
>
> I'll check out sage next and report back. Thanks for the tips!
> Adam > > On May 19, 5:48?pm, David Warde-Farley wrote: >> Hi Adam, >> >> On 17-Apr-09, at 12:38 PM, Keflavich wrote: >> >> > can't get a 64-bit version of python compiled and google has been >> > unhelpful in resolving the problem. ?Is there a workaround to get 64 >> >> I have had a lot of success with (using the 2.6.2 sources) >> >> mkdir -p build && cd build && ./configure --with-framework- >> name=Python64 --with-universal-archs=all --enable-framework --enable- >> universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 && make && sudo make >> install >> >> That builds a 4-way universal binary. --with-universal-archs=64-bit >> will get you just the 64 bit stuff (note that a few of the make >> install steps will fail because of Carbon deprecation but nothing >> important as far as I can see). >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed May 20 22:38:10 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 20 May 2009 22:38:10 -0400 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: Message-ID: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> On Wed, May 20, 2009 at 9:54 PM, Erik Tollerud wrote: > I use the splines in scipy.interpolate quite a bit, and I particularly > like ?the *UnivariateSpline and *BivariateSpline ?wrapper classes. > However, I cannot for the life of me work out what gives with the > names and documentation... As far as I can tell, the univariate > splines are as follows: > > UnivariateSpline : A spline where the number of knots is chosen using > the "smoothing factor" s > LSQUnivariateSpline: A spline where the knots are explicitly specified At least the docs need a lot of improvement, I tried out the splines for the first time a short time ago, and I only realized this for LSQUnivariateSpline after receiving exceptions when I wanted to update the knots as described in the docs. Also, the dispatch behaviour of UnivariateSpline is not described. The docs for the original wrappers, splrep, splev, sproot, spalde, splint, is more informative. I was looking at these spline classes as a replacement for the spline implementation in stats.models, but for a newbie to splines the documentation is not very helpful. But the splines produce nice pictures. Josef > InterpolatedUnivariateSpline: A spline with s=0 or t=[] (e.g. passes > through all the fitting points) > > The documentation just says the second two "just have less error > checking"... aren't they for very different purposes? ?And while I > recognize that name changes at this stage might be uncalled for, the > names are somewhat misleading, too... shouldn't they be > "SmoothUnivariateSpline","KnotUnivariateSpline", and > "InterpolatedUnivariateSpline" or something like that? 
> > It also seems there are similar versions for the *BivariateSpline > classes, although it's unclear to me exactly what the raw > BivariateSpline class does as compared to the SmoothBivariateSpline > (and the RectBivariateSpline, at least, makes sense) > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From oliphant at enthought.com Wed May 20 23:44:23 2009 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 20 May 2009 22:44:23 -0500 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" In-Reply-To: <14B8B804-C2F2-4B8D-B713-2C30465CD409@stsci.edu> References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> <14B8B804-C2F2-4B8D-B713-2C30465CD409@stsci.edu> Message-ID: On May 20, 2009, at 10:16 AM, Perry Greenfield wrote: > Hi Travis, > > Does registration imply that there is a limit to how many can > participate? The upper limit is pretty large, but there may be technical limits. I'm not sure. If you are interested come. At this point it is first- come first-serve. Thanks for the interest. -Travis From oliphant at enthought.com Wed May 20 23:45:31 2009 From: oliphant at enthought.com (Travis Oliphant) Date: Wed, 20 May 2009 22:45:31 -0500 Subject: [SciPy-user] Join us for "Scientific Computing with Python Webinar" In-Reply-To: References: <1437076956.5204661242825355676.JavaMail.root@g2mp1br2.las.expertcity.com> <2355F1D0-DD01-4BD1-8482-FDDC6FEE6C91@enthought.com> Message-ID: <2D5E2C32-693E-46E5-ADE7-F523C2197D19@enthought.com> On May 20, 2009, at 9:53 AM, Kenneth Arnold wrote: > This is a great idea! Will the presentations be archived? Even an > unedited screen capture could be very helpful for people who can't > make the time, have technical issues with the meeting software, or > need to review the details later. We are going to experiment with recording them, but do not know if it will be successful. I will let everyone know if the recording works. Thanks, -Travis -- Travis Oliphant Enthought Inc. 1-512-536-1057 http://www.enthought.com oliphant at enthought.com From ferrell at diablotech.com Wed May 20 23:56:16 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Wed, 20 May 2009 21:56:16 -0600 Subject: [SciPy-user] Subclass of TimeSeries In-Reply-To: <10A4C5BE-B8AD-4160-A00C-705D150869D8@gmail.com> References: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com> <10A4C5BE-B8AD-4160-A00C-705D150869D8@gmail.com> Message-ID: On May 20, 2009, at 5:50 PM, Pierre GM wrote: > > On May 20, 2009, at 7:12 PM, Robert Ferrell wrote: > >> How do I derive a subclass from TimeSeries? I tried >> >> >> import scikits.timeseries as ts >> import numpy as np >> >> class MyTS(ts.TimeSeries): >> def __init__(self, label, dataArray, dateArray): >> ts.TimeSeries.__init__(self, data=dataArray, dates=dateArray) >> self.label=label >> return > > Just like the use of __init__ is not the way to subclass ndarray > (except when subclassing also from an object that requires __init__), > that's not the way to go. You need to implement a __new__ and a > __array_finalize__ as described here: > http://docs.scipy.org/doc/numpy/user/basics.subclassing.html > > The easiest is to follow some of the examples given in the > scikits.hydroclimpy package (http://hydroclimpy.sourceforge.net/). 
> For instance, here's a class that attaches a reference period to the
> series (in scikits/hydroclimpy/core/base.py). Adapting this example to
> your case should be straightforward. Nevertheless, don't hesitate to
> ask for more details/info as needed.
>
> Cheers
> P.

Thanks for the response. I think I get the idea. For my purposes, it seems
to work if I just call the TimeSeries.__new__ method. That way I don't
have to get into maoptions. I don't think I need to call
__array_finalize__. I think I can use __init__ for my case. I hope I've
understood the docs correctly on that. I definitely did not understand
that part completely.

thanks for the help,
-robert

Here's what seems to do what I want.

###
import scikits.timeseries as ts
import numpy as np
from numpy import ma
from numpy.ma import nomask
import unittest

class MyTS(ts.TimeSeries):
    def __new__(cls, label, data, dates, mask=nomask, dtype=None,
                copy=False, fill_value=None, subok=True, keep_mask=True,
                hard_mask=False, autosort=True, **options):
        cls = ts.TimeSeries.__new__(cls, data=data, dates=dates, mask=mask,
                                    dtype=dtype, copy=copy,
                                    fill_value=fill_value, subok=subok,
                                    keep_mask=keep_mask,
                                    hard_mask=hard_mask, autosort=autosort,
                                    **options)
        return cls

    def __init__(self, label, **args):
        self.label = label
        return

class MyTSTests(unittest.TestCase):
    def setUp(self):
        self.dateArray = ts.date_array(start_date=ts.Date('d', '2009-1-1'),
                                       length=10, freq='d')
        self.A = np.random.random(10)
        self.B = np.random.random(10)
        self.myDtype = [('a', np.float64), ('b', np.float64)]
        self.myData = np.array(zip(self.A, self.B), dtype=self.myDtype)

    def test_1_instantiate(self):
        """Test that we can instantiate a MyTS instance."""
        myts = MyTS(label='doesWork', data=zip(self.A, self.B),
                    dates=self.dateArray, dtype=self.myDtype)
        self.failUnless(isinstance(myts, MyTS),
                        'Expected an instance of %s, but got %s.'
                        % (MyTS, type(myts)))

    def test_2_label(self):
        """Test that the label got set"""
        myts = MyTS(label='doesWork', data=zip(self.A, self.B),
                    dates=self.dateArray, dtype=self.myDtype)
        self.failUnless(myts.label == 'doesWork',
                        'Expected label %s, but got label %s.'
                        % ('doesWork', myts.label))

    def test_3_field(self):
        """Test that we can get the fields and they have the correct data."""
        myts = MyTS(label='doesWork', data=zip(self.A, self.B),
                    dates=self.dateArray, dtype=self.myDtype)
        # Put the field accessor first
        self.failUnless((myts['a'].data == self.A).all())
        # Or apply it to the data
        self.failUnless((myts.data['b'] == self.B).all())

MyTSTestSuite = unittest.TestLoader().loadTestsFromTestCase(MyTSTests)

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(MyTSTestSuite)

From bsouthey at gmail.com  Thu May 21 09:38:01 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 21 May 2009 08:38:01 -0500
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201431p69fc758oee63aa7ad4aada0e@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> Message-ID: <4A155939.3070303@gmail.com> josef.pktd at gmail.com wrote: > On Wed, May 20, 2009 at 9:32 PM, Bruce Southey wrote: > >> On Wed, May 20, 2009 at 7:23 PM, wrote: >> >>> 2009/5/20 St?fan van der Walt : >>> >>>> 2009/5/21 : >>>> >>>>> scipy.linalg has some matrix creation functions, some look like >>>>> >>>> Thanks, that looks like a good spot. >>>> >>>> Please review the attached patch (if anybody does not want it to go >>>> in, now is a good time to voice your concerns). >>>> >>>> >>> It might be better to preserve the dtype of the input arrays, e.g. I >>> could think of a use for integer variables, e.g. dummy variables in >>> regression or anova, or to allow an option for the dtype when you >>> create the zeros array. >>> >>> I don't know if anybody would want complex or character matrices. >>> >>> I just checked, np.kron and np.diag preserves integer type, and >>> np.kron converts to float for mixed types, diag preserves character >>> type. >>> >>> otherwise it looks good and useful to me. >>> >>> Josef >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> Hi, >> What is the definition that you are using for a block diagonal matrix? >> >> Some definitions use square matrices: >> http://en.wikipedia.org/wiki/Block_matrix#Block_diagonal_matrices >> http://mathworld.wolfram.com/BlockDiagonalMatrix.html >> >> But Matlab's blkdiag function does not and, thus, it may not result in >> a diagonal matrix: >> http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/ref/blkdiag.html >> > > Since it's just a useful function and not a mathematical concept, I > think the meaning is clear from the construction and example, although > the matlab explanation is more informative. > I disagree because block diagonal does have a special meaning and the result is not a diagonal matrix! > If all individual component matrices are square, then you get the > wikipedia definition. > My understanding is that only if the inputs are diagonal matrices will you get a block diagonal matrix from this function. > But for regression with panel data or with dummy variables, the > analogy to kronecker product is better, component matrices have many > rows (observations) and only a few columns (regressors). I would have > to look it up again, but I think the design matrix for a seemingly > unrelated regression (SUR) would be just block_diag(x1,x2,...xn) and > endogenous variable is vstack(y1,y2,...yn), I'm not sure what matrix > operation (kronecker product) would yield the covariance matrix in one > step. 
> (there is only a stub at > http://en.wikipedia.org/wiki/Seemingly_unrelated_regression and it > doesn't look completely correct to me) > > The wikipedia page has more types of block matrices, and maybe some of > them also have general use cases. > > I haven't gotten around yet to program anything for panel data or SUR, > so I don't know what else might be needed. > > Josef > > I am not discounting the function because I am well aware of the potential uses of this (but not SUR models as I prefer the more general multivariate and mixed models). Rather I am objecting to the name because it does not return a block diagonal matrix. Bruce From stefan at sun.ac.za Thu May 21 10:03:34 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 May 2009 16:03:34 +0200 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <4A155939.3070303@gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> Message-ID: <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com> Hi Bruce 2009/5/21 Bruce Southey : > I am not discounting the function because I am well aware of the > potential uses of this (but not SUR models as I prefer the more general > multivariate and mixed models). Rather I am objecting to the name > because it does not return a block diagonal matrix. I can't really think of anything better; would you like to make a suggestion? Thanks! St?fan From ivo.maljevic at gmail.com Thu May 21 10:18:44 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Thu, 21 May 2009 10:18:44 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <4A155939.3070303@gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> Message-ID: <826c64da0905210718g316c8926k9eabc8960c5b678@mail.gmail.com> > > I disagree because block diagonal does have a special meaning and the > result is not a diagonal matrix! > > > If all individual component matrices are square, then you get the > > wikipedia definition. > > > My understanding is that only if the inputs are diagonal matrices will > you get a block diagonal matrix from this function. > > I am not a scipy developer, but from time to time I send an email to this list with either questions or answers to somebody else's questions. What do you mean when you say "My understanding is that only if the inputs are diagonal matrices will you get a block diagonal matrix from this function." 
Here is a block diagonal matrix, in the non-square sense, with
functionality similar to matlab:

>>> a=numpy.ones([2,2])
>>> b=numpy.random.rand(3,3)
>>> import block
>>> c=block.block_diag(a,b)
>>> print c
[[ 1.          1.          0.          0.          0.        ]
 [ 1.          1.          0.          0.          0.        ]
 [ 0.          0.          0.93924665  0.43404552  0.46698808]
 [ 0.          0.          0.61331601  0.23593332  0.39016641]
 [ 0.          0.          0.10644194  0.66638397  0.6305998 ]]

as you can see, 'a' and 'b' are not diagonal matrices. I think the only
question is whether this function should work only in square matrix cases,
such as in the next example:

>>> bb=numpy.random.randn(2,2)
>>> cc=block.block_diag(a,bb)
>>> print cc
[[ 1.          1.          0.          0.        ]
 [ 1.          1.          0.          0.        ]
 [ 0.          0.          0.57135707 -0.03777517]
 [ 0.          0.         -0.22069926 -1.40840111]]
>>>

From ivo.maljevic at gmail.com  Thu May 21 10:29:18 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 21 May 2009 10:29:18 -0400
Subject: [SciPy-user] Inconsistent function calls?
Message-ID: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>

Just as I was making up an example for the block diagonal matrix question,
I remembered the old problem I had with the consistency of numpy
functions.

If you want to generate a random number matrix, you can make the same call
as with matlab:

rand(2,2) for 2x2 matrix,
randn(1,5) for 1x5 etc.

but if you want to generate ones or zeros matrices, you cannot say
ones(3,3), you have to write ones([3,3]) or zeros([3,3]) (note the extra
brackets).

It is not a big deal, but it seems a bit inconsistent to me.

From josef.pktd at gmail.com  Thu May 21 10:32:31 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 21 May 2009 10:32:31 -0400
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <826c64da0905210718g316c8926k9eabc8960c5b678@mail.gmail.com>
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <826c64da0905210718g316c8926k9eabc8960c5b678@mail.gmail.com>
Message-ID: <1cd32cbb0905210732q2ff2ff38w3ef47bbf44a43671@mail.gmail.com>

On Thu, May 21, 2009 at 10:18 AM, Ivo Maljevic wrote:
>> I disagree because block diagonal does have a special meaning and the
>> result is not a diagonal matrix!
>>
>> > If all individual component matrices are square, then you get the
>> > wikipedia definition.
>> >
>> My understanding is that only if the inputs are diagonal matrices will
>> you get a block diagonal matrix from this function.
>
> I am not a scipy developer, but from time to time I send an email to this
> list with either questions or answers to somebody else's questions.
>
> What do you mean when you say "My understanding is that only if the
> inputs are diagonal matrices will you get a block diagonal matrix from
> this function."
>
> Here is a block diagonal matrix, in the non-square sense, with
> functionality similar to matlab:
>
>>>> a=numpy.ones([2,2])
>>>> b=numpy.random.rand(3,3)
>>>> import block
>>>> c=block.block_diag(a,b)
>>>> print c
> [[ 1.          1.          0.          0.          0.        ]
>  [ 1.          1.          0.          0.          0.        ]
>  [ 0.          0.          0.93924665  0.43404552  0.46698808]
>  [ 0.          0.          0.61331601  0.23593332  0.39016641]
>  [ 0.          0.          0.10644194  0.66638397  0.6305998 ]]
>
> as you can see, 'a' and 'b' are not diagonal matrices. I think the only
> question is whether this function should work only in square matrix
> cases, such as in the next example:
>
>>>> bb=numpy.random.randn(2,2)
>>>> cc=block.block_diag(a,bb)
>>>> print cc
> [[ 1.          1.          0.          0.        ]
>  [ 1.          1.          0.          0.        ]
>  [ 0.          0.          0.57135707 -0.03777517]
>  [ 0.          0.         -0.22069926 -1.40840111]]
>>>>

If the user uses only square matrices, then (s)he will get only "block
diagonal" (according to wikipedia) matrices back. I don't see why we
should choose another name just because the function allows for more
flexibility, and I don't see a use case for automatically checking whether
the matrices are square, so that the user does not violate the square
definition.

Josef

Another example:

>>> x = block_diag(np.eye(2), [[1,2],[3,4],[5,6]])
>>> x.shape
(5, 4)
>>> np.dot(x.T,x)
array([[  1.,   0.,   0.,   0.],
       [  0.,   1.,   0.,   0.],
       [  0.,   0.,  35.,  44.],
       [  0.,   0.,  44.,  56.]])
>>> np.linalg.inv(np.dot(x.T,x))
array([[ 1.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  1.        ,  0.        ,  0.        ],
       [ 0.        ,  0.        ,  2.33333333, -1.83333333],
       [ 0.        ,  0.        , -1.83333333,  1.45833333]])

From matthieu.brucher at gmail.com  Thu May 21 10:34:03 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 21 May 2009 16:34:03 +0200
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
Message-ID:

Hi,

This is because ones(3,3) will be called with 3 as the first argument and
3 as the second argument, not with (3, 3) as the first argument. As the
second argument is also used for something else, it is not even possible
to detect if the second argument is a typo or a value for the matrix (for
objects, it's not possible to choose). Python (a language) is not meant to
behave like Matlab (not a language).

This was also raised several months/years ago, you can browse the ML
archives to find the discussion.

Matthieu

-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
From ivo.maljevic at gmail.com  Thu May 21 10:46:33 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 21 May 2009 10:46:33 -0400
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To:
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
Message-ID: <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>

I appreciate your argument that Python is not meant to be Matlab, but the
learning curve is definitely less steep if the basic stuff works in a
similar fashion. I do not have a problem with more esoteric functions
working differently, as I have to look them up in Matlab as well as in
Scipy.

I did not look at the history of this issue, but it can get confusing when
you write ones(3) and you get a vector instead of the expected 3x3 matrix.
It is one thing to be different because of implementation limitations, but
this almost looks like being different for the sake of being different.
Then, if you write ones(3,3) you get an error message like this:

>>> numpy.ones(3,3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/dist-packages/numpy/core/numeric.py", line 1489, in ones
    a = empty(shape, dtype, order)
TypeError: data type not understood

Don't get me wrong, I like very much what the Scipy community is doing,
and I use every opportunity to mention to other people that they can
switch from Matlab to Python/Scipy without much effort.

Ivo

2009/5/21 Matthieu Brucher
> Hi,
>
> This is because ones(3,3) will be called with 3 as the first argument
> and 3 as the second argument, not with (3, 3) as the first argument.
> As the second argument is also used for something else, it is not even
> possible to detect if the second argument is a typo or a value for
> the matrix (for objects, it's not possible to choose).
> Python (a language) is not meant to behave like Matlab (not a language).
>
> This was also raised several months/years ago, you can browse the ML
> archives to find the discussion.
>
> Matthieu
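The tuple form avoids that error; a quick illustration (nothing here is
specific to this thread):

import numpy as np

a = np.ones((3, 3))              # shape passed as a tuple: a 3x3 array
b = np.ones(3)                   # a length-3 one-dimensional array
c = np.zeros((2, 4), dtype=int)  # the second positional argument is dtype

Passing the shape as a single tuple is what leaves the second positional
slot free for the dtype, which is the design point Matthieu makes above.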
From bsouthey at gmail.com  Thu May 21 11:05:31 2009
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 21 May 2009 10:05:31 -0500
Subject: [SciPy-user] Easy way to make a block diagonal matrix?
In-Reply-To: <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com>
References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <142682e10905201437y451875ack35f92cfbdce5547e@mail.gmail.com> <142682e10905201440y65c71cefu55fd7169a09b2f21@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com>
Message-ID: <4A156DBB.7000105@gmail.com>

Stéfan van der Walt wrote:
> Hi Bruce
>
> 2009/5/21 Bruce Southey :
>> I am not discounting the function because I am well aware of the
>> potential uses of this (but not SUR models as I prefer the more general
>> multivariate and mixed models). Rather I am objecting to the name
>> because it does not return a block diagonal matrix.
>
> I can't really think of anything better; would you like to make a
> suggestion?
>
> Thanks!
> Stéfan

Not really, because everyone has different terminologies and expectations.
Block_array? That would fit if the function were generalized, and 'array'
matches the terminology Numpy commonly uses.

Bruce

From pgmdevlist at gmail.com  Thu May 21 11:13:20 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 21 May 2009 11:13:20 -0400
Subject: [SciPy-user] Subclass of TimeSeries
In-Reply-To:
References: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com> <10A4C5BE-B8AD-4160-A00C-705D150869D8@gmail.com>
Message-ID:

On May 20, 2009, at 11:56 PM, Robert Ferrell wrote:
> On May 20, 2009, at 5:50 PM, Pierre GM wrote:
>> The easiest is to follow some of the examples given in the
>> scikits.hydroclimpy package (http://hydroclimpy.sourceforge.net/).
>> For instance, here's a class that attaches a reference period to the
>> series (in scikits/hydroclimpy/core/base.py). Adapting this example
>> to your case should be straightforward. Nevertheless, don't hesitate
>> to ask for more details/info as needed.
>>
>> Cheers
>> P.
>
> Thanks for the response. I think I get the idea.

Almost. Once again, you should not use __init__ to define new attributes,
as it may fail in some cases. For example, try to get the 'label' of your
myts['a'] object, or pickle myts...

> For my purposes, it seems to work if I just call the TimeSeries.__new__
> method. That way I don't have to get into maoptions.

No pb with that.

> I don't think I need to call __array_finalize__.

You may not have to if you use the fact that MaskedArrays and TimeSeries
always carry a special dictionary (_optinfo) with them, which stores
various attributes and which is properly taken care of in
__array_finalize__. You just need to make sure you store your 'label' in
_optinfo, and provide convenient access methods.
Check that:

class MyTS(ts.TimeSeries):
    def __new__(cls, label, data, dates, mask=nomask, dtype=None,
                copy=False, fill_value=None, subok=True, keep_mask=True,
                hard_mask=False, autosort=True, **options):
        cls = ts.TimeSeries.__new__(cls, data=data, dates=dates, mask=mask,
                                    dtype=dtype, copy=copy,
                                    fill_value=fill_value, subok=subok,
                                    keep_mask=keep_mask,
                                    hard_mask=hard_mask, autosort=autosort,
                                    **options)
        cls._optinfo['label'] = label
        return cls

    def _get_label(self):
        return self._optinfo['label']
    def _set_label(self, label):
        self._optinfo['label'] = label
    label = property(fget=_get_label, fset=_set_label)

It works as you expect, and in cases you didn't think of (try taking the
label of myts['a'] again). Now, if I remember correctly, Travis O. toyed
with the idea of implementing _optinfo for basic ndarrays some time ago;
it may land sometime in the future. No doubt that'll be the way to go
when it happens.

From aisaac at american.edu  Thu May 21 11:18:18 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 21 May 2009 11:18:18 -0400
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
Message-ID: <4A1570BA.2060801@american.edu>

On 5/21/2009 10:46 AM Ivo Maljevic apparently wrote:
> if you write ones(3,3) you get an error message

You can find a full discussion of this some time back, and imo, NumPy has
the superior (more explicit) convention, especially given that it allows
you to simply specify the dtype of your array. Also, while a 2d (matrix)
focus might suggest to you that ones(3) should be a 3 by 3 matrix, NumPy
has an nd-array focus, so a vector result is more natural.

Cheers,
Alan Isaac

From elmickerino at hotmail.com  Thu May 21 11:19:15 2009
From: elmickerino at hotmail.com (ElMickerino)
Date: Thu, 21 May 2009 08:19:15 -0700 (PDT)
Subject: [SciPy-user] fmin using spherical bounds
Message-ID: <23654947.post@talk.nabble.com>

Hello Fellow SciPythonistas,

I have a seemingly simple task: minimize a function inside a (hyper)sphere
in parameter space. Unfortunately, I can't seem to make fmin_cobyla do
what I'd like it to do, and after reading some of the old messages posted
to this forum, it seems that fmin_cobyla will actually wander outside of
the allowed regions of parameter space as long as it smells a minimum
there (with some appropriate hand-waving).

The function I'd like to minimize is only defined in this hypersphere
(well, hyperellipsoid, but I do some linear algebra), so ideally I'd use
something like fmin_bounds to strictly limit where the search can occur,
but it seems that fmin_bounds can only handle rectangular bounds.
fmin_cobyla seems to be happy to simply ignore the constraints I give it
(and yes, I've got print statements that make it clear that it is
wandering far, far outside of the allowed region of parameter space). Is
there a simple way to use fmin_bounds with a bound of the form:

    x^2 + y^2 + z^2 + .... <= 1.0 ?

or more generally:

    transpose(x).M.x <= 1.0

where x is a column vector and M is a positive definite matrix?

It seems very bizarre that fmin_cobyla is perfectly happy to wander very,
very far outside of where it should be.

Thanks very much,
Michael
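One way to write that quadratic constraint down, as a rough sketch (the
objective, M, and starting point are invented placeholders, not taken from
this thread):

import numpy as np
from scipy.optimize import fmin_slsqp

M = np.diag([1.0, 2.0, 3.0])    # stand-in positive definite matrix

def objective(x):
    # placeholder objective; only meaningful inside the ellipsoid
    return np.sum((x - 0.5) ** 2)

def constraint(x):
    # feasible iff x^T M x <= 1.0, i.e. 1.0 - x^T M x >= 0
    return 1.0 - np.dot(x, np.dot(M, x))

x0 = np.zeros(3)
xopt = fmin_slsqp(objective, x0, ieqcons=[constraint])

fmin_slsqp treats each function in ieqcons as a quantity that must stay
>= 0, which is the form the fmin_slsqp suggestion in the next message
relies on.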
From josef.pktd at gmail.com Thu May 21 11:19:13 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 May 2009 11:19:13 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> Message-ID: <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> On Thu, May 21, 2009 at 10:46 AM, Ivo Maljevic wrote: > I appreciate your argument that Python is not meant to to be Matlab, but > the learning curve is definitely less steep if the basic stuff works in a > similar fashion. > I do not have a problem with more esoteric functions working differently, as > I have to look them up > in Matlab as well as in Scipy. > > I did not look at the history of this issue, but it can get confusing when > you write > ones(3) and you get a vector instead of the expected 3x3 matrix. It is one > thing to be > different because of implementation limitations, but this almost looks like > being different > for the sake of being different. Then, if you write ones(3,3) you get an > error message like this: > >>>> numpy.ones(3,3) > Traceback (most recent call last): > ? File "", line 1, in > ? File "/usr/lib/python2.6/dist-packages/numpy/core/numeric.py", line 1489, > in ones > ??? a = empty(shape, dtype, order) > TypeError: data type not understood I regularly trip on missing ( ) for shape or size, but on the other hand, you can use keyword arguments also as positional arguments, which saves again on typing. I find the brackets inconvenient, but since I start to write functions with flexible number of arguments, requiring tuples reduces the ambiguity in the interpretation of the function arguments. numpy.ones(3) shouldn't produce a 2dim array, otherwise it would be difficult to produce a one dimensional array of ones. matlab doesn't have this problem, since it requires or creates almost always 2 dimensional arrays. My main problem in the syntax, when I switch back and forth between matlab and python are [] for array indices versus () in matlab. But I think these syntactic differences are easy to adjust to, especially if I get an instantaneous error message. For me the main problems are the conceptional differences between matlab and numpy, e.g. views, instead of copy on write, and some of the fancy indexing behavior in numpy. We pay for the increased flexibility of numpy compared to matlab with a steeper learning curve and more bug hunting, at least I do. Josef > > Don't get me wrong, I like very much what Scipy community is doing, and I > use every opportunity to mention to > other people that they can switch from Matlab to Python/Scipy without much > effort. > > Ivo > > > 2009/5/21 Matthieu Brucher >> >> Hi, >> >> This is because ones(3,3) will be called with 3 as the first argument >> and 3 as the second argument, not with (3, 3) as the first argument. >> As the second argument is also used for something else, it is not even >> possible to detect if the second argument is an typo or a value for >> the matrix (for objects, it's not possible to choose). >> Python (a langage) is not meant to behave like Matlab (not a langage). >> >> This was also raised several months/years ago, you can browse the ML >> archives to find the discussion. 
>> >> Matthieu >> >> 2009/5/21 Ivo Maljevic : >> > Just as I was making up an example for the block diagonal matrix >> > question, I >> > remembered the old problem I had with >> > consistency of nympy functions. >> > >> > If you want to generate a random number matrix, you can make the same >> > call >> > as with matlab: >> > >> > rand(2,2) for 2x2 matrix, >> > randn(1,5) for 1x5 etc. >> > >> > but if you want to generate ones or zeros matrices, you cannot say >> > ones(3,3), you have to write ones([3,3]) or zeros([3,3]) (note the extra >> > brackets). >> > >> > It is not a big deal, but it seems a bit inconsistent for me. >> > >> > _______________________________________________ >> > SciPy-user mailing list >> > SciPy-user at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> >> >> -- >> Information System Engineer, Ph.D. >> Website: http://matthieu-brucher.developpez.com/ >> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 >> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ferrell at diablotech.com Thu May 21 11:39:27 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Thu, 21 May 2009 09:39:27 -0600 Subject: [SciPy-user] Subclass of TimeSeries In-Reply-To: References: <5A44BDD7-E7C3-40B2-8286-9CD7E9AE3E83@diablotech.com> <10A4C5BE-B8AD-4160-A00C-705D150869D8@gmail.com> Message-ID: <796AE3B3-7FB0-421F-AC4B-38E074BA29BB@diablotech.com> On May 21, 2009, at 9:13 AM, Pierre GM wrote: > > On May 20, 2009, at 11:56 PM, Robert Ferrell wrote: > >> On May 20, 2009, at 5:50 PM, Pierre GM wrote: >> >>> >>> On May 20, 2009, at 7:12 PM, Robert Ferrell wrote: >>> >>>> How do I derive a subclass from TimeSeries? I tried >>> > >>> >>> The easiest is to follow some of the examples given in the >>> scikits.hydroclimpy package (http://hydroclimpy.sourceforge.net/). >>> For instance, here's a class that attaches a reference period to the >>> series (in scikits/hydroclimpy/core/base.py). Adapting this example >>> to >>> your case should be straightforward. Nevertheless, don't hesitate to >>> ask for more details/info as needed. >>> >>> Cheers >>> P. >> >> Thanks for the response. I think I get the idea. > > Almost. Once again, you should not use __init__ to define new > attributes, as it may fail in some cases. For example, try to get the > 'label' of your myts['a'] object, or pickle myts... > > >> For my purposes, it >> seems to work if I just call the TimeSeries.__new__ method. That way >> I don't have to get into maoptions. > > No pb with that. > > >> I don't think I need to call __array_finalize__. > > You may not have to if you use the fact that MaskedArrays and > TimeSeries always carry a special dictionary (_optinfo) with them, > which stores various attributes and which is properly taken care of in > __array_finalize__. You just need to make sure you store your 'label' > in _optinfo, and provide convenient access methods. 
> Check that:
>
> class MyTS(ts.TimeSeries):
>     def __new__(cls, label, data, dates, mask=nomask, dtype=None,
>                 copy=False, fill_value=None, subok=True, keep_mask=True,
>                 hard_mask=False, autosort=True, **options):
>         cls = ts.TimeSeries.__new__(cls, data=data, dates=dates, mask=mask,
>                                     dtype=dtype, copy=copy,
>                                     fill_value=fill_value, subok=subok,
>                                     keep_mask=keep_mask, hard_mask=hard_mask,
>                                     autosort=autosort, **options)
>         cls._optinfo['label'] = label
>         return cls
>
>     def _get_label(self):
>         return self._optinfo['label']
>     def _set_label(self, label):
>         self._optinfo['label'] = label
>     label = property(fget=_get_label, fset=_set_label)
>
> It works as you expect, and in cases you didn't think of (try again
> taking the label of myts['a']).

Thanks so much for the detailed solution. Works like a charm, and surely saved me much agony sometime in the future, wondering why myts['a'].label wasn't working. I like the _optinfo solution.

-r

From josef.pktd at gmail.com Thu May 21 11:41:06 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 21 May 2009 11:41:06 -0400
Subject: [SciPy-user] fmin using spherical bounds
In-Reply-To: <23654947.post@talk.nabble.com>
References: <23654947.post@talk.nabble.com>
Message-ID: <1cd32cbb0905210841r65dd05c3p850195943bad8759@mail.gmail.com>

On Thu, May 21, 2009 at 11:19 AM, ElMickerino wrote:
>
> Hello Fellow SciPythonistas,
>
> I have a seemingly simple task: minimize a function inside a (hyper)sphere
> in parameter space. Unfortunately, I can't seem to make fmin_cobyla do what
> I'd like it to do, and after reading some of the old messages posted to this
> forum, it seems that fmin_cobyla will actually wander outside of the allowed
> regions of parameter space as long as it smells a minimum there (with some
> appropriate hand-waving).
>
> The function I'd like to minimize is only defined in this hypersphere (well,
> hyperellipsoid, but I do some linear algebra), so ideally I'd use something
> like fmin_bounds to strictly limit where the search can occur, but it seems
> that fmin_bounds can only handle rectangular bounds. fmin_cobyla seems to
> be happy to simply ignore the constraints I give it (and yes, I've got print
> statements that make it clear that it is wandering far, far outside of the
> allowed region of parameter space). Is there a simple way to use
> fmin_bounds with a bound of the form:
>
>      x^2 + y^2 + z^2 + .... <= 1.0 ?
>
> or more generally:
>
>      transpose(x).M.x <= 1.0, where x is a column vector and M is a
> positive definite matrix?
>
> It seems very bizarre that fmin_cobyla is perfectly happy to wander very,
> very far outside of where it should be.

maybe you can give fmin_slsqp a try:
http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp

It is very flexible for defining the constraints. I recently added the example from the trac ticket to the tutorial.

Otherwise I would try to reparameterize, or extend the objective function to all real numbers with a penalization term.

Josef

From stefan at sun.ac.za Thu May 21 11:48:21 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 21 May 2009 17:48:21 +0200
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
Message-ID: <9457e7c80905210848o5c46a5ag31f1fc3084714aab@mail.gmail.com>

Hi Ivo

2009/5/21 Ivo Maljevic :
> If you want to generate a random number matrix, you can make the same call
> as with matlab:
>
> rand(2,2) for 2x2 matrix,
> randn(1,5) for 1x5 etc.
>
> but if you want to generate ones or zeros matrices, you cannot say
> ones(3,3), you have to write ones([3,3]) or zeros([3,3]) (note the extra
> brackets).

This is one of those situations where you have six on the one hand and half-a-dozen on the other. ones(3, 3) implies that the code

  shape = (3, 4)
  ones(shape)

has to be changed to

  ones(*shape)

Also, it becomes harder to understand the function signatures:

  def ones(shape)  ->  def ones(*args)

which means that it is no longer clear what the function expects as parameters. We have decided to use func(shape) as the standard interface, so you may see randn and rand as exceptions.

Regards
Stéfan

From ivo.maljevic at gmail.com Thu May 21 11:49:12 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 21 May 2009 11:49:12 -0400
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
Message-ID: <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>

Josef,
Yes, the copy problem is also something I always have to keep in mind, and the explicit data type for ones and zeros.

As far as I am concerned, I have learned the differences and it is OK for me. I guess my observation about the inconsistency is more of a philosophical question: why bother to make something that looks like matlab, and then claim that it is not meant to be that, except sometimes. Matplotlib does a pretty good job at replicating matlab plot functions, at least at the level I need it to.

I do not see any python imposed limitation at making ones(3,3) by default returning what ones([3,3]) does, the same way random.rand(3,3) does, but if it was decided things should be done the way they are, I can play by the rules, especially since I am aware of them.

BTW, the reason why I included that error message in my previous message is because I think it is completely non-helpful.

Ivo

2009/5/21
> On Thu, May 21, 2009 at 10:46 AM, Ivo Maljevic
> wrote:
> > I appreciate your argument that Python is not meant to be Matlab, but
> > the learning curve is definitely less steep if the basic stuff works
> > in a similar fashion. I do not have a problem with more esoteric
> > functions working differently, as I have to look them up in Matlab as
> > well as in Scipy.
> >
> > I did not look at the history of this issue, but it can get confusing
> > when you write ones(3) and you get a vector instead of the expected
> > 3x3 matrix. It is one thing to be different because of implementation
> > limitations, but this almost looks like being different
> > for the sake of being different.
Then, if you write ones(3,3) you get an > > error message like this: > > > >>>> numpy.ones(3,3) > > Traceback (most recent call last): > > File "", line 1, in > > File "/usr/lib/python2.6/dist-packages/numpy/core/numeric.py", line > 1489, > > in ones > > a = empty(shape, dtype, order) > > TypeError: data type not understood > > I regularly trip on missing ( ) for shape or size, but on the other > hand, you can use keyword arguments also as positional arguments, > which saves again on typing. I find the brackets inconvenient, but > since I start to write functions with flexible number of arguments, > requiring tuples reduces the ambiguity in the interpretation of the > function arguments. > > numpy.ones(3) shouldn't produce a 2dim array, otherwise it would be > difficult to produce a one dimensional array of ones. matlab doesn't > have this problem, since it requires or creates almost always 2 > dimensional arrays. > > My main problem in the syntax, when I switch back and forth between > matlab and python are [] for array indices versus () in matlab. But I > think these syntactic differences are easy to adjust to, especially if > I get an instantaneous error message. > > For me the main problems are the conceptional differences between > matlab and numpy, e.g. views, instead of copy on write, and some of > the fancy indexing behavior in numpy. We pay for the increased > flexibility of numpy compared to matlab with a steeper learning curve > and more bug hunting, at least I do. > > Josef > > > > > Don't get me wrong, I like very much what Scipy community is doing, and I > > use every opportunity to mention to > > other people that they can switch from Matlab to Python/Scipy without > much > > effort. > > > > Ivo > > > > > > 2009/5/21 Matthieu Brucher > >> > >> Hi, > >> > >> This is because ones(3,3) will be called with 3 as the first argument > >> and 3 as the second argument, not with (3, 3) as the first argument. > >> As the second argument is also used for something else, it is not even > >> possible to detect if the second argument is an typo or a value for > >> the matrix (for objects, it's not possible to choose). > >> Python (a langage) is not meant to behave like Matlab (not a langage). > >> > >> This was also raised several months/years ago, you can browse the ML > >> archives to find the discussion. > >> > >> Matthieu > >> > >> 2009/5/21 Ivo Maljevic : > >> > Just as I was making up an example for the block diagonal matrix > >> > question, I > >> > remembered the old problem I had with > >> > consistency of nympy functions. > >> > > >> > If you want to generate a random number matrix, you can make the same > >> > call > >> > as with matlab: > >> > > >> > rand(2,2) for 2x2 matrix, > >> > randn(1,5) for 1x5 etc. > >> > > >> > but if you want to generate ones or zeros matrices, you cannot say > >> > ones(3,3), you have to write ones([3,3]) or zeros([3,3]) (note the > extra > >> > brackets). > >> > > >> > It is not a big deal, but it seems a bit inconsistent for me. > >> > > >> > _______________________________________________ > >> > SciPy-user mailing list > >> > SciPy-user at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> > >> > >> -- > >> Information System Engineer, Ph.D. 
> >> Website: http://matthieu-brucher.developpez.com/ > >> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > >> LinkedIn: http://www.linkedin.com/in/matthieubrucher > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu May 21 11:50:17 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 May 2009 10:50:17 -0500 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <4A156DBB.7000105@gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201511p93042dem50be4fa66d0a1dca@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com> <4A156DBB.7000105@gmail.com> Message-ID: <3d375d730905210850x5b76118cg680619e8e5ca95a6@mail.gmail.com> On Thu, May 21, 2009 at 10:05, Bruce Southey wrote: > Not really because everyone has different terminologies and expectations. > Block_array? Well it would be true if the function was generalized and > array because common terminology used by Numpy is arrays. That doesn't really capture the notion. I would expect block_array() to let me build up any blocked array (e.g. [[A, B], [C, D]]) like bmat() already does, not just blocks along a "notional diagonal". I suggest we bow to Matlab's precedent. It apparently has not caused many problems in their community. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu May 21 11:58:16 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 May 2009 10:58:16 -0500 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> Message-ID: <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> On Thu, May 21, 2009 at 10:49, Ivo Maljevic wrote: > Josef, > Yes, the copy problem is also something I always have to keep in mind, and > the explicit data type for ones and zeros. > > As far as I am concerned, I have learned the differences and it is OK for > me. > I guess my observation about the inconsistence is more of a philosophical > question: > why bother to make something that looks like matlab, and than claim that it > is not > meant to be that, except sometimes. Matplotlib does a pretty good job at > replicating > matlab plot functions, at least at the level I need it to. 
> I do not see any python imposed limitation at making ones(3,3) by default
> returning what ones([3,3]) does, the same way random.rand(3,3) does,

As has been said several times already, there is one: ones() takes an optional second argument, the dtype. That is the Python-imposed limitation. Ignore rand() if you like. numpy.random.random() does take a shape tuple like ones() and the rest. rand() was added the way it was because some people wanted a Matlab-like version of that particular function. I'd personally be quite happy to drop it, but we do have to maintain some amount of backwards compatibility, so you do have to deal with the warts accumulated by history.

> but if it was decided things should be done the way they are, I can play
> by the rules, especially since I am aware of them.
>
> BTW, the reason why I included that error message in my previous message
> is because I think it is completely non-helpful.

It's telling you exactly what the problem is, but it is necessarily brief and general-purpose. It can't read your mind to try to figure out what you expected to happen and tailor its message to that. Any time you see an error with arguments, you should always then read the function's docstring, which should enlighten you more thoroughly.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ivo.maljevic at gmail.com Thu May 21 12:13:18 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 21 May 2009 12:13:18 -0400
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
	<826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
	<3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com>
Message-ID: <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com>

2009/5/21 Robert Kern

> > I do not see any python imposed limitation at making ones(3,3) by
> > default returning what ones([3,3]) does, the same way random.rand(3,3)
> > does,
>
> As has been said several times already, there is one: ones() takes an
> optional second argument, the dtype. That is the Python-imposed
> limitation.
>

Robert, by following this list I know that I shouldn't even try to respond to your message as you always turn out to be right, but I'll try anyway. Yes, there is a limitation if you take into account that that is how ones() is implemented now, but there is no limitation that prevents this function from being modified. But, as you said, rand() and randn() are not the norm but the exception, so that is fine.

> BTW, the reason why I included that error message in my previous message
> is because I think it is completely non-helpful.
>
> It's telling you exactly what the problem is, but it is necessarily
> brief and general-purpose. It can't read your mind to try to figure
> out what you expected to happen and tailor its message to that. Any
> time you see an error with arguments, you should always then read the
> function's docstring which should enlighten you more thoroughly.
>

How about "The second argument is not dtype"? Still brief ...
> -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Thu May 21 12:13:44 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 21 May 2009 12:13:44 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <9457e7c80905210848o5c46a5ag31f1fc3084714aab@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <9457e7c80905210848o5c46a5ag31f1fc3084714aab@mail.gmail.com> Message-ID: <4A157DB8.3050204@american.edu> On 5/21/2009 11:48 AM St?fan van der Walt apparently wrote: > We have decided to use func(shape) as the standard interface, so you > may see randn and rand as exceptions. And recall that some people argued long ago that these exceptions would just prove confusing. That is my view, although I guess it is too late now. ... Recall that random.rand is just a "convenience function" for random.random, and the only "convenience" is that it accepts a nonstandard argument. The docs are fairly explicit about this: "This is a convenience function. If you want an interface that takes a shape-tuple as the first argument, refer to `random`." I say "fairly explicit" because it will not be clear to a new user exactly what the "convenience" is and how unusual the interface is. (Iirc, only rand and randn do this.) Cheers, Alan Isaac From matthieu.brucher at gmail.com Thu May 21 12:16:04 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 21 May 2009 18:16:04 +0200 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> Message-ID: > Robert, by following this list I know that I shouldn't even try to respond > to your message as > you always turn out to be right, but I'll try anyway. Yes, there is a > limitation if you > take into account that that is how ones() is implemented now, but there is > no limitation that > prevents this function from being modified. But, as you said, rand() and > rand() are not > the norm but the exception, so that is fine. But rand() doesn't have the full extent of random() use, as I've said in my first answer. Matthieu -- Information System Engineer, Ph.D. 
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From elmickerino at hotmail.com Thu May 21 12:21:50 2009
From: elmickerino at hotmail.com (ElMickerino)
Date: Thu, 21 May 2009 09:21:50 -0700 (PDT)
Subject: [SciPy-user] fmin using spherical bounds
In-Reply-To: <1cd32cbb0905210841r65dd05c3p850195943bad8759@mail.gmail.com>
References: <23654947.post@talk.nabble.com>
	<1cd32cbb0905210841r65dd05c3p850195943bad8759@mail.gmail.com>
Message-ID: <23656148.post@talk.nabble.com>

Thanks very much for your rapid response, I'll certainly give this a try.

Unfortunately, re-parameterizing my function is not an option, since the only sensible new choice of coordinates would be hyperspherical ones: that way, I would specify the bounds of the radius to be between 0 and R. The problem with that scheme is that these minimization schemes evaluate the gradient in cartesian coordinates only (not to mention how amazingly messy it would be to decompose my function into hyperspherical harmonics).

It would make sense for scipy/numpy to have a minimization function that evaluates the function only after it verifies that it has taken a step into an allowed region of parameter space.

--Michael

josef.pktd wrote:
>
> On Thu, May 21, 2009 at 11:19 AM, ElMickerino
> wrote:
>>
>> Hello Fellow SciPythonistas,
>>
>> I have a seemingly simple task: minimize a function inside a
>> (hyper)sphere in parameter space. Unfortunately, I can't seem to make
>> fmin_cobyla do what I'd like it to do, and after reading some of the old
>> messages posted to this forum, it seems that fmin_cobyla will actually
>> wander outside of the allowed regions of parameter space as long as it
>> smells a minimum there (with some appropriate hand-waving).
>>
>> The function I'd like to minimize is only defined in this hypersphere
>> (well, hyperellipsoid, but I do some linear algebra), so ideally I'd use
>> something like fmin_bounds to strictly limit where the search can occur,
>> but it seems that fmin_bounds can only handle rectangular bounds.
>> fmin_cobyla seems to be happy to simply ignore the constraints I give it
>> (and yes, I've got print statements that make it clear that it is
>> wandering far, far outside of the allowed region of parameter space).
>> Is there a simple way to use fmin_bounds with a bound of the form:
>>
>>      x^2 + y^2 + z^2 + .... <= 1.0 ?
>>
>> or more generally:
>>
>>      transpose(x).M.x <= 1.0, where x is a column vector and M is a
>> positive definite matrix?
>>
>> It seems very bizarre that fmin_cobyla is perfectly happy to wander
>> very, very far outside of where it should be.
>>
>
> maybe you can give fmin_slsqp a try
> http://docs.scipy.org/scipy/docs/scipy.optimize.slsqp.fmin_slsqp/#scipy-optimize-fmin-slsqp
>
> It is very flexible for defining the constraints. I recently added the
> example from the trac ticket to the tutorial.
>
> Otherwise I would try to reparameterize, or extend the objective
> function to all real numbers with a penalization term.
>
> Josef
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
View this message in context: http://www.nabble.com/fmin-using-spherical-bounds-tp23654947p23656148.html
Sent from the Scipy-User mailing list archive at Nabble.com.
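[Spelled out, josef's fmin_slsqp suggestion looks roughly like the following sketch. The objective and the matrix M are hypothetical stand-ins; fmin_slsqp treats each function in ieqcons as a constraint that should end up >= 0. Note that, like COBYLA, SLSQP may still take trial steps outside the feasible set during the search, which is the original complaint in this thread.]

import numpy as np
from scipy.optimize import fmin_slsqp

M = np.eye(3)  # hypothetical positive definite matrix defining the ellipsoid

def objective(x):
    # stand-in for the real objective, which is only defined inside the ellipsoid
    return np.sum((x - 0.5) ** 2)

def ellipsoid(x):
    # feasible when >= 0, i.e. x^T M x <= 1
    return 1.0 - np.dot(x, np.dot(M, x))

xopt = fmin_slsqp(objective, np.zeros(3), ieqcons=[ellipsoid], iprint=0)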
From peridot.faceted at gmail.com Thu May 21 12:27:54 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 May 2009 12:27:54 -0400 Subject: [SciPy-user] fmin using spherical bounds In-Reply-To: <23654947.post@talk.nabble.com> References: <23654947.post@talk.nabble.com> Message-ID: 2009/5/21 ElMickerino : > > Hello Fellow SciPythonistas, > > I have a seemingly simple task: minimize a function inside a (hyper)sphere > in parameter space. ?Unfortunately, I can't seem to make fmin_cobyla do what > I'd like it to do, and after reading some of the old messages posted to this > forum, it seems that fmin_cobyla will actually wander outside of the allowed > regions of parameter space as long as it smells a minimum there (with some > appropriate hand-waving). > > The function I'd like to minimize is only defined in this hypersphere (well, > hyperellipsoid, but I do some linear algebra), so ideally I'd use something > like fmin_bounds to strictly limit where the search can occur, but it seems > that fmin_bounds can only handle rectangular bounds. ?fmin_cobyla seems to > be happy to simply ignore the constraints I give it (and yes, I've got print > statements that make it clear that it is wandering far, far outside of the > allowed region of parameter space). ?Is there a simple way to use > fmin_bounds with a bound of the form: > > ? ? ?x^2 + y^2 + z^2 + .... <= 1.0 ? > > or more generally: > > ? ? ?transpose(x).M.x <= 1.0 ?where x is a column vector and M is a > positive definite matrix? > > > It seems very bizarre that fmin_cobyla is perfectly happy to wander very, > very far outside of where it should be. > > Thanks very much, > Michael My experience with this sort of thing has been that while constrained optimizers will only report a minimum satisfying the constraints, none of them (that I have used) can work without evaluating the function outside the bounded region. This is obviously a problem if your function doesn't make any sense out there. I have to agree that reparameterizing your function is the way to go. Rectangular constraints are possible. If evaluating the gradient is too hard, just let the minimizer approximate it (though it shouldn't be too hard to come up with a gradient-conversion matrix so that it's a simple matrix multiply). There's no need to rewrite your function at all; you just use a wrapper function that converts coordinates back from spherical to what your function wants. Anne > -- > View this message in context: http://www.nabble.com/fmin-using-spherical-bounds-tp23654947p23654947.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From travis at enthought.com Thu May 21 12:29:44 2009 From: travis at enthought.com (Travis Vaught) Date: Thu, 21 May 2009 11:29:44 -0500 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> Message-ID: On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: > ... 
> > Robert, by following this list I know that I shouldn't even try to > respond to your message as > you always turn out to be right, but I'll try anyway. ... Robert, Could you write some code to verify this? Here's a snippet off the top of my head: def check_robert(): if 1: print "Robert is right." else: print "Robert is wrong." TIA, Travis From robert.kern at gmail.com Thu May 21 12:35:16 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 May 2009 11:35:16 -0500 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> Message-ID: <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> On Thu, May 21, 2009 at 11:29, Travis Vaught wrote: > On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: > >> ... >> >> Robert, by following this list I know that I shouldn't even try to >> respond to your message as >> you always turn out to be right, but I'll try anyway. ... > > Robert, > > Could you write some code to verify this? > > Here's a snippet off the top of my head: > > def check_robert(): > ? ? if 1: > ? ? ? ? print "Robert is right." > ? ? else: > ? ? ? ? print "Robert is wrong." These days, it's been more like: def check_robert(): if random() > 0.5: print "Robert is right." else: print "Robert is wrong." -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Thu May 21 12:36:01 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 May 2009 12:36:01 -0400 Subject: [SciPy-user] fmin using spherical bounds In-Reply-To: References: <23654947.post@talk.nabble.com> Message-ID: <1cd32cbb0905210936g3d062a7dr778a240868da548a@mail.gmail.com> On Thu, May 21, 2009 at 12:27 PM, Anne Archibald wrote: > 2009/5/21 ElMickerino : >> >> Hello Fellow SciPythonistas, >> >> I have a seemingly simple task: minimize a function inside a (hyper)sphere >> in parameter space. ?Unfortunately, I can't seem to make fmin_cobyla do what >> I'd like it to do, and after reading some of the old messages posted to this >> forum, it seems that fmin_cobyla will actually wander outside of the allowed >> regions of parameter space as long as it smells a minimum there (with some >> appropriate hand-waving). >> >> The function I'd like to minimize is only defined in this hypersphere (well, >> hyperellipsoid, but I do some linear algebra), so ideally I'd use something >> like fmin_bounds to strictly limit where the search can occur, but it seems >> that fmin_bounds can only handle rectangular bounds. ?fmin_cobyla seems to >> be happy to simply ignore the constraints I give it (and yes, I've got print >> statements that make it clear that it is wandering far, far outside of the >> allowed region of parameter space). ?Is there a simple way to use >> fmin_bounds with a bound of the form: >> >> ? ? ?x^2 + y^2 + z^2 + .... <= 1.0 ? >> >> or more generally: >> >> ? ? ?transpose(x).M.x <= 1.0 ?where x is a column vector and M is a >> positive definite matrix? 
>>
>>
>> It seems very bizarre that fmin_cobyla is perfectly happy to wander very,
>> very far outside of where it should be.
>>
>> Thanks very much,
>> Michael
>
> My experience with this sort of thing has been that while constrained
> optimizers will only report a minimum satisfying the constraints, none
> of them (that I have used) can work without evaluating the function
> outside the bounded region. This is obviously a problem if your
> function doesn't make any sense out there.
>
> I have to agree that reparameterizing your function is the way to go.
> Rectangular constraints are possible. If evaluating the gradient is
> too hard, just let the minimizer approximate it (though it shouldn't
> be too hard to come up with a gradient-conversion matrix so that it's
> a simple matrix multiply). There's no need to rewrite your function at
> all; you just use a wrapper function that converts coordinates back
> from spherical to what your function wants.
>
> Anne

Do you know how well these optimization functions would handle discontinuities at the boundary? e.g.

def wrapobjectivefn(x):
    if np.dot(x, np.dot(M, x)) > 1.0:   # i.e. transpose(x).M.x > 1.0
        return a_large_number
    else:
        return realobjectivefn(x)

I don't know what the appropriate wrapper for the gradient would be, maybe also some large vector.

I'm doing things like this in matlab, but I haven't tried with the scipy minimizers yet.

Josef

From robert.kern at gmail.com Thu May 21 12:38:51 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 May 2009 11:38:51 -0500
Subject: [SciPy-user] fmin using spherical bounds
In-Reply-To: <1cd32cbb0905210936g3d062a7dr778a240868da548a@mail.gmail.com>
References: <23654947.post@talk.nabble.com>
	<1cd32cbb0905210936g3d062a7dr778a240868da548a@mail.gmail.com>
Message-ID: <3d375d730905210938w38511010l1afafb45ede08202@mail.gmail.com>

On Thu, May 21, 2009 at 11:36, wrote:
> On Thu, May 21, 2009 at 12:27 PM, Anne Archibald
> wrote:
>> 2009/5/21 ElMickerino :
>>>
>>> Hello Fellow SciPythonistas,
>>>
>>> I have a seemingly simple task: minimize a function inside a
>>> (hyper)sphere in parameter space. Unfortunately, I can't seem to make
>>> fmin_cobyla do what I'd like it to do, and after reading some of the
>>> old messages posted to this forum, it seems that fmin_cobyla will
>>> actually wander outside of the allowed regions of parameter space as
>>> long as it smells a minimum there (with some appropriate hand-waving).
>>>
>>> The function I'd like to minimize is only defined in this hypersphere
>>> (well, hyperellipsoid, but I do some linear algebra), so ideally I'd
>>> use something like fmin_bounds to strictly limit where the search can
>>> occur, but it seems that fmin_bounds can only handle rectangular
>>> bounds. fmin_cobyla seems to be happy to simply ignore the constraints
>>> I give it (and yes, I've got print statements that make it clear that
>>> it is wandering far, far outside of the allowed region of parameter
>>> space). Is there a simple way to use fmin_bounds with a bound of the
>>> form:
>>>
>>>      x^2 + y^2 + z^2 + .... <= 1.0 ?
>>>
>>> or more generally:
>>>
>>>      transpose(x).M.x <= 1.0, where x is a column vector and M is a
>>> positive definite matrix?
>>>
>>> It seems very bizarre that fmin_cobyla is perfectly happy to wander
>>> very, very far outside of where it should be.
>>> >>> Thanks very much, >>> Michael >> >> My experience with this sort of thing has been that while constrained >> optimizers will only report a minimum satisfying the constraints, none >> of them (that I have used) can work without evaluating the function >> outside the bounded region. This is obviously a problem if your >> function doesn't make any sense out there. >> >> I have to agree that reparameterizing your function is the way to go. >> Rectangular constraints are possible. If evaluating the gradient is >> too hard, just let the minimizer approximate it (though it shouldn't >> be too hard to come up with a gradient-conversion matrix so that it's >> a simple matrix multiply). There's no need to rewrite your function at >> all; you just use a wrapper function that converts coordinates back >> from spherical to what your function wants. >> >> >> Anne >> > > Do you know how well these optimization functions would handle > discontinuities at the boundary? e.g > > def wrapobjectivefn(x): > ? ? if transpose(x).M.x > 1.0: > ? ? ? ? ?return a_large_number > ? ?else: > ? ? ? ? ?return realobjectivefn(x) > > I don't know what the appropriate wrapper for the gradient would be, > maybe also some large vector. > > I'm doing things like this in matlab, but I haven't tried with the > scipy minimizers yet. You would probably want some gradient out there to point it back to the feasible region, at least roughly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu May 21 12:39:39 2009 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 21 May 2009 12:39:39 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <4A157DB8.3050204@american.edu> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <9457e7c80905210848o5c46a5ag31f1fc3084714aab@mail.gmail.com> <4A157DB8.3050204@american.edu> Message-ID: <4A1583CB.1000203@american.edu> The years go by... Here's a reference, with Robert's still relevant answer. http://www.mail-archive.com/numpy-discussion at lists.sourceforge.net/msg00089.html Cheers, Alan Isaac From ondrej at certik.cz Thu May 21 12:42:13 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 21 May 2009 09:42:13 -0700 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> Message-ID: <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> On Thu, May 21, 2009 at 9:35 AM, Robert Kern wrote: > On Thu, May 21, 2009 at 11:29, Travis Vaught wrote: >> On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: >> >>> ... >>> >>> Robert, by following this list I know that I shouldn't even try to >>> respond to your message as >>> you always turn out to be right, but I'll try anyway. ... >> >> Robert, >> >> Could you write some code to verify this? >> >> Here's a snippet off the top of my head: >> >> def check_robert(): >> ? ? if 1: >> ? ? ? ? print "Robert is right." 
>> ? ? else: >> ? ? ? ? print "Robert is wrong." > > These days, it's been more like: > > ?def check_robert(): > ? ?if random() > 0.5: > ? ? ?print "Robert is right." > ? ?else: > ? ? ?print "Robert is wrong." what about: def check_enthought(): if 1: print "Robert is right." else: print "Travis is right." O. From stefan at sun.ac.za Thu May 21 12:44:24 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 21 May 2009 18:44:24 +0200 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> Message-ID: <9457e7c80905210944x539243cew9139326961ec7b6b@mail.gmail.com> 2009/5/21 Travis Vaught : > On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: > >> ... >> >> Robert, by following this list I know that I shouldn't even try to >> respond to your message as >> you always turn out to be right, but I'll try anyway. ... > > Robert, > > Could you write some code to verify this? > > Here's a snippet off the top of my head: > > def check_robert(): > ? ? if 1: > ? ? ? ? print "Robert is right." > ? ? else: > ? ? ? ? print "Robert is wrong." Robert's answer negatively biased, even though he used the Mersenne twister. I preferred this very trustworthy source on the internet: http://www.googlefight.com/index.php?lang=en_GB&word1=robert+is+always+right&word2=robert+is+wrong+sometimes which yielded the following definitive answer: Robert is always right: 74600000 Robert is wrong sometimes: 1760000 Clearly, Robert is always right at least 97% of the time and only *2% of sometimes* wrong (that's a very small number). Now that's good science. Cheers St?fan From travis at enthought.com Thu May 21 12:47:25 2009 From: travis at enthought.com (Travis Vaught) Date: Thu, 21 May 2009 11:47:25 -0500 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> Message-ID: <755047CF-00F5-4B36-A6E5-1C838B262287@enthought.com> On May 21, 2009, at 11:42 AM, Ondrej Certik wrote: > On Thu, May 21, 2009 at 9:35 AM, Robert Kern > wrote: >> On Thu, May 21, 2009 at 11:29, Travis Vaught >> wrote: >>> On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: >>> >>>> ... >>>> >>>> Robert, by following this list I know that I shouldn't even try to >>>> respond to your message as >>>> you always turn out to be right, but I'll try anyway. ... >>> >>> Robert, >>> >>> Could you write some code to verify this? >>> >>> Here's a snippet off the top of my head: >>> >>> def check_robert(): >>> if 1: >>> print "Robert is right." >>> else: >>> print "Robert is wrong." >> >> These days, it's been more like: >> >> def check_robert(): >> if random() > 0.5: >> print "Robert is right." 
>> else: >> print "Robert is wrong." > > what about: > > def check_enthought(): > if 1: > print "Robert is right." > else: > print "Travis is right." > > O. This is nonsense--because I always agree with Robert ;-) Travis From ivo.maljevic at gmail.com Thu May 21 12:56:57 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Thu, 21 May 2009 12:56:57 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <755047CF-00F5-4B36-A6E5-1C838B262287@enthought.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> <755047CF-00F5-4B36-A6E5-1C838B262287@enthought.com> Message-ID: <826c64da0905210956o74341e31y28497d45cdd2c064@mail.gmail.com> You can also add G?del's twist: "Robert K. will never say this sentence is true". And then ask him if this sentence is true or not. 2009/5/21 Travis Vaught > > On May 21, 2009, at 11:42 AM, Ondrej Certik wrote: > > > On Thu, May 21, 2009 at 9:35 AM, Robert Kern > > wrote: > >> On Thu, May 21, 2009 at 11:29, Travis Vaught > >> wrote: > >>> On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: > >>> > >>>> ... > >>>> > >>>> Robert, by following this list I know that I shouldn't even try to > >>>> respond to your message as > >>>> you always turn out to be right, but I'll try anyway. ... > >>> > >>> Robert, > >>> > >>> Could you write some code to verify this? > >>> > >>> Here's a snippet off the top of my head: > >>> > >>> def check_robert(): > >>> if 1: > >>> print "Robert is right." > >>> else: > >>> print "Robert is wrong." > >> > >> These days, it's been more like: > >> > >> def check_robert(): > >> if random() > 0.5: > >> print "Robert is right." > >> else: > >> print "Robert is wrong." > > > > what about: > > > > def check_enthought(): > > if 1: > > print "Robert is right." > > else: > > print "Travis is right." > > > > O. > > This is nonsense--because I always agree with Robert ;-) > > Travis > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu May 21 12:58:12 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 21 May 2009 12:58:12 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? 
In-Reply-To: <3d375d730905210850x5b76118cg680619e8e5ca95a6@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <1cd32cbb0905201559i6081e8c6s4a559e77f42fdbc2@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com> <4A156DBB.7000105@gmail.com> <3d375d730905210850x5b76118cg680619e8e5ca95a6@mail.gmail.com> Message-ID: <1cd32cbb0905210958i463c21cao27dd8180686eec54@mail.gmail.com> On Thu, May 21, 2009 at 11:50 AM, Robert Kern wrote: > On Thu, May 21, 2009 at 10:05, Bruce Southey wrote: >> Not really because everyone has different terminologies and expectations. >> Block_array? Well it would be true if the function was generalized and >> array because common terminology used by Numpy is arrays. > > That doesn't really capture the notion. I would expect block_array() > to let me build up any blocked array (e.g. [[A, B], [C, D]]) like > bmat() already does, not just blocks along a "notional diagonal". I > suggest we bow to Matlab's precedent. It apparently has not caused > many problems in their community. > If we consider name changes, I would prefer the name without the underline either blockdiag or blkdiag as in matlab, since it is essentially the same function. Josef From robert.kern at gmail.com Thu May 21 13:00:02 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 May 2009 12:00:02 -0500 Subject: [SciPy-user] Easy way to make a block diagonal matrix? In-Reply-To: <1cd32cbb0905210958i463c21cao27dd8180686eec54@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com> <4A156DBB.7000105@gmail.com> <3d375d730905210850x5b76118cg680619e8e5ca95a6@mail.gmail.com> <1cd32cbb0905210958i463c21cao27dd8180686eec54@mail.gmail.com> Message-ID: <3d375d730905211000p68c677cftbadf943ccaf58c26@mail.gmail.com> On Thu, May 21, 2009 at 11:58, wrote: > If we consider name changes, Let's not. I think we've had enough bike-shedding for one morning. :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ivo.maljevic at gmail.com Thu May 21 13:00:45 2009 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Thu, 21 May 2009 13:00:45 -0400 Subject: [SciPy-user] Easy way to make a block diagonal matrix? 
In-Reply-To: <1cd32cbb0905210958i463c21cao27dd8180686eec54@mail.gmail.com> References: <142682e10905201244y66cfd603m74f54229e1fbf896@mail.gmail.com> <9457e7c80905201632y7f0659cfkac5c5e1a22418e78@mail.gmail.com> <1cd32cbb0905201723l74afffd3ga5903a16f0a3383d@mail.gmail.com> <1cd32cbb0905201909n6a2b4855v2ea65f9a2077d29@mail.gmail.com> <4A155939.3070303@gmail.com> <9457e7c80905210703o611217a7i2acc0cfabf6bee97@mail.gmail.com> <4A156DBB.7000105@gmail.com> <3d375d730905210850x5b76118cg680619e8e5ca95a6@mail.gmail.com> <1cd32cbb0905210958i463c21cao27dd8180686eec54@mail.gmail.com> Message-ID: <826c64da0905211000s448deca0m222def64deedd29b@mail.gmail.com> As a non member I would "vote" for blkdiag as I normally look for matlab names when I try to find a SciPy function. 2009/5/21 > On Thu, May 21, 2009 at 11:50 AM, Robert Kern > wrote: > > On Thu, May 21, 2009 at 10:05, Bruce Southey wrote: > >> Not really because everyone has different terminologies and > expectations. > >> Block_array? Well it would be true if the function was generalized and > >> array because common terminology used by Numpy is arrays. > > > > That doesn't really capture the notion. I would expect block_array() > > to let me build up any blocked array (e.g. [[A, B], [C, D]]) like > > bmat() already does, not just blocks along a "notional diagonal". I > > suggest we bow to Matlab's precedent. It apparently has not caused > > many problems in their community. > > > > If we consider name changes, I would prefer the name without the underline > either blockdiag > or blkdiag as in matlab, since it is essentially the same function. > > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej at certik.cz Thu May 21 13:10:38 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 21 May 2009 10:10:38 -0700 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <755047CF-00F5-4B36-A6E5-1C838B262287@enthought.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> <755047CF-00F5-4B36-A6E5-1C838B262287@enthought.com> Message-ID: <85b5c3130905211010q5e505c2ascc6f9a1d72df6ad8@mail.gmail.com> On Thu, May 21, 2009 at 9:47 AM, Travis Vaught wrote: > > On May 21, 2009, at 11:42 AM, Ondrej Certik wrote: > >> On Thu, May 21, 2009 at 9:35 AM, Robert Kern >> wrote: >>> On Thu, May 21, 2009 at 11:29, Travis Vaught >>> wrote: >>>> On May 21, 2009, at 11:13 AM, Ivo Maljevic wrote: >>>> >>>>> ... >>>>> >>>>> Robert, by following this list I know that I shouldn't even try to >>>>> respond to your message as >>>>> you always turn out to be right, but I'll try anyway. ... >>>> >>>> Robert, >>>> >>>> Could you write some code to verify this? >>>> >>>> Here's a snippet off the top of my head: >>>> >>>> def check_robert(): >>>> ? ? if 1: >>>> ? ? ? ? print "Robert is right." >>>> ? ? else: >>>> ? ? ? ? print "Robert is wrong." >>> >>> These days, it's been more like: >>> >>> ?def check_robert(): >>> ? 
?if random() > 0.5: >>> ? ? ?print "Robert is right." >>> ? ?else: >>> ? ? ?print "Robert is wrong." >> >> what about: >> >> def check_enthought(): >> ?if 1: >> ? ?print "Robert is right." >> ?else: >> ? ?print "Travis is right." >> >> O. > > This is nonsense--because I always agree with Robert ;-) Very good! Boss should always stay behind his employees. :) Ondrej From peridot.faceted at gmail.com Thu May 21 13:19:44 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 21 May 2009 13:19:44 -0400 Subject: [SciPy-user] fmin using spherical bounds In-Reply-To: <1cd32cbb0905210936g3d062a7dr778a240868da548a@mail.gmail.com> References: <23654947.post@talk.nabble.com> <1cd32cbb0905210936g3d062a7dr778a240868da548a@mail.gmail.com> Message-ID: 2009/5/21 : > Do you know how well these optimization functions would handle > discontinuities at the boundary? e.g > > def wrapobjectivefn(x): > if transpose(x).M.x > 1.0: > return a_large_number > else: > return realobjectivefn(x) > > I don't know what the appropriate wrapper for the gradient would be, > maybe also some large vector. > > I'm doing things like this in matlab, but I haven't tried with the > scipy minimizers yet. I would say they'd handle it badly, in the sense that most of them try to do something like build up a quadratic form approximating the function, then head for the minimum of that quadratic form. A discontinuity is of course going to make nonsense of this quadratic form, though the fact that you get a huge value will tend to send the optimizer screeching back in the direction it came from. Unfortunately it probably won't know it should discard the huge value, so if it overshoots too much you could wind up with the set of evaluation points being filled with bogus values. If feasible, it might not hurt to use something like huge_value*(x**2+y**2+...) so that the solver tends to gravitate back towards the allowed value. In an ideal world, even in the absence of constraints, it would be possible for the objective function to return NaN, which the solver would (ideally) recognize as indicating a point where the function cannot be safely evaluated. It would then choose some other point to evaluate the function. You might need a constraint to safely choose such a new point. (And if you had a constraint, safer and simpler to simply refuse to even call the objective anywhere the constraint is not met.) But as I understand it, the problem with this is not just that it hasn't been implemented, but that a good optimization scheme would need to be more flexible about the locations of its samples than the current ones are. Anne > Josef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From Chris.Barker at noaa.gov Thu May 21 16:53:13 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 21 May 2009 13:53:13 -0700 Subject: [SciPy-user] Inconsistent function calls? 
In-Reply-To: <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
	<826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
Message-ID: <4A15BF39.20501@noaa.gov>

Ivo Maljevic wrote:
> why bother to make something that looks like matlab,

who ever said numpy "looks like matlab", any more than it looks like any number of other programming environments...

> Matplotlib does a pretty good job at replicating
> matlab plot functions, at least at the level I need it to.

Because it was designed exactly to do that -- but I think MPL's Matlab replication has been a hindrance, rather than a help, to a good API. However, it has been a help to its adoption. You may have noticed that over the years MPL is moving away from matlab, toward a more pythonic API.

Personally, I like python so much more than Matlab exactly for these differences (and so many more). I suppose it's tough if you switch back and forth, but I haven't touched Matlab in years.

It is rand() that is inconsistent, and that is an accident of history.

> what ones([3,3]) does, the same way random.rand(3,3) does,

well, rand() is a convenience function, and doesn't take a bunch of other parameters. In fact, it's listed under "Compatibility functions", and is really a wrapper for numpy.random.uniform, which takes a shape argument.

> the reason why I included that error message in my previous message
> is because I think it is completely non-helpful.

That's another issue -- non-helpful error messages do show up a lot -- in that case, if the user had typed:

  np.zeros(3, dtype=3)

the error message would make sense. If you can suggest a better message, patches are always welcome.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From Chris.Barker at noaa.gov Thu May 21 17:02:53 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu, 21 May 2009 14:02:53 -0700
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <9457e7c80905210944x539243cew9139326961ec7b6b@mail.gmail.com>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
	<826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
	<3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com>
	<826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com>
	<9457e7c80905210944x539243cew9139326961ec7b6b@mail.gmail.com>
Message-ID: <4A15C17D.6070906@noaa.gov>

Stéfan van der Walt wrote:
> I preferred this very trustworthy source on the internet:
>
> http://www.googlefight.com/index.php?lang=en_GB&word1=robert+is+always+right&word2=robert+is+wrong+sometimes
>
> which yielded the following definitive answer:
>
> Robert is always right: 74600000
> Robert is wrong sometimes: 1760000

not so good when you do this, though:

http://www.googlefight.com/index.php?lang=en_GB&word1=robert+kern+is+right&word2=robert+kern+is+wrong

Robert Kern is right: 112000
Robert Kern is wrong: 648000

ouch! sorry, Robert. That's clearly not surveying the numpy list!

-Chris

NOTE: my score is worse!

--
Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From dwf at cs.toronto.edu Thu May 21 17:04:54 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 21 May 2009 17:04:54 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> <826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com> <3d375d730905210935n1a745bap8478713989af1a00@mail.gmail.com> <85b5c3130905210942t32941853t5d828b24c154a2fa@mail.gmail.com> Message-ID: <3310CE8B-C8A8-4F0E-AAF8-95CA1404E970@cs.toronto.edu> On 21-May-09, at 12:42 PM, Ondrej Certik wrote: > > def check_enthought(): > if 1: > print "Robert is right." > else: > print "Travis is right." I'm not sure this works, considering there are two Travises at Enthought. One of them is almost always going to be right, whether or not Robert is right. :) D From dwf at cs.toronto.edu Thu May 21 17:11:31 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 21 May 2009 17:11:31 -0400 Subject: [SciPy-user] Inconsistent function calls? In-Reply-To: <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com> <826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com> <1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com> <826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com> <3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com> Message-ID: <0FB5B2B7-3B92-4000-8A8D-E0890E761C00@cs.toronto.edu> On 21-May-09, at 11:58 AM, Robert Kern wrote: > I'd personally be quite happy to drop it, but we do have to > maintain some amount of backwards compatibility, so you do have to > deal with the warts accumulated by history. +1 on being happy to drop them. I am routinely tempted into using rand() and randn() instead of random() and random.normal and then I forget which things want tuples and which things want regular arguments. matplotlib.mlab seems like the most reasonable place for Matlab compatibility functions, that behave "the Matlab way" as opposed to "the NumPy way" as they've already got plenty (find() is basically a poor man's which(), etc.). NumPy's goal should be consistency, I think. Opinions may vary, of course, and it depends whether the matplotlib guys mind taking NumPy's impure castoffs. David From dwf at cs.toronto.edu Thu May 21 17:13:29 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 21 May 2009 17:13:29 -0400 Subject: [SciPy-user] Inconsistent function calls? 
In-Reply-To: <4A15C17D.6070906@noaa.gov>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
	<826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
	<3d375d730905210858u1540087arcb27589dc2b71601@mail.gmail.com>
	<826c64da0905210913m70a69e4ela2edd489b9cc2f94@mail.gmail.com>
	<9457e7c80905210944x539243cew9139326961ec7b6b@mail.gmail.com>
	<4A15C17D.6070906@noaa.gov>
Message-ID: <1920C8FF-D2AD-475C-A46E-B7769A193F0D@cs.toronto.edu>

On 21-May-09, at 5:02 PM, Christopher Barker wrote:

> not so good when you do this, though:
>
> http://www.googlefight.com/index.php?lang=en_GB&word1=robert+kern+is+right&word2=robert+kern+is+wrong
>
> Robert Kern is right: 112000
> Robert Kern is wrong: 648000
>
> ouch! sorry, Robert. That's clearly not surveying the numpy list!

Except all the cases where other people are wrong and he's correcting
them are counted against him by this metric. At least if either of them
raises the word "wrong". :P

David

From david_baddeley at yahoo.com.au  Thu May 21 21:47:00 2009
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Thu, 21 May 2009 18:47:00 -0700 (PDT)
Subject: [SciPy-user] Fitting an arbitrary distribution
Message-ID: <36002.66689.qm@web33005.mail.mud.yahoo.com>

Hi all,

I want to fit an arbitrary distribution (in this case the sum of
multiple Gaussians) to some measured data and was wondering if anyone
could give me any pointers as to the best way of doing this. I'd like
to avoid fitting to a histogram if possible. How do the .fit() methods
of the various distributions under scipy.stats do it? My first thought
would be to compare the cumulative distribution of my data with that of
the model distribution using something like the Kolmogorov-Smirnov
metric (maximum absolute distance between the curves) and to minimize
this using optimize.fmin. Is this the right way to do it? Or is there
an easier way?

thanks in advance,
David

From david at ar.media.kyoto-u.ac.jp  Thu May 21 21:58:06 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 22 May 2009 10:58:06 +0900
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <36002.66689.qm@web33005.mail.mud.yahoo.com>
References: <36002.66689.qm@web33005.mail.mud.yahoo.com>
Message-ID: <4A1606AE.1030008@ar.media.kyoto-u.ac.jp>

David Baddeley wrote:
> Hi all,
>
> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distribution using something like the Kolmogorov-Smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way?

That's a complex topic in general; there is no best answer, it depends
on your case, and what you intend to do with the estimated
distribution.

In the case of a sum of multiple Gaussians, the more commonly used name
for this model is mixture models, and there is a vast range of possible
techniques for fitting a dataset to this model. There is a package in
scikits.learn to use the so-called Expectation Maximization algorithm
to estimate the maximum likelihood of such models

http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/

You can have an overview on the wiki page:

http://en.wikipedia.org/wiki/Mixture_model

cheers,

David

From ivo.maljevic at gmail.com  Thu May 21 22:17:42 2009
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 21 May 2009 22:17:42 -0400
Subject: [SciPy-user] Inconsistent function calls?
In-Reply-To: <4A15BF39.20501@noaa.gov>
References: <826c64da0905210729l2444b542mb86e9014abdd6540@mail.gmail.com>
	<826c64da0905210746u5a1f4f0aw3bd883af985efc3e@mail.gmail.com>
	<1cd32cbb0905210819w3c208fc0w76aec9b4d007b8a8@mail.gmail.com>
	<826c64da0905210849k6bcf5920j767099497116f559@mail.gmail.com>
	<4A15BF39.20501@noaa.gov>
Message-ID: <826c64da0905211917u15ec1567g72547e6cff117535@mail.gmail.com>

Sorry Christopher, I thought since they are used for the same purpose,
and have similar syntax (http://www.scipy.org/NumPy_for_Matlab_Users
says ``MATLAB® and NumPy/SciPy have a lot in common``), that SciPy
looks more like Matlab than any other programming language (excluding
Octave and other Matlab clones). As for everything else you wrote, I
already said that I don't have any problem with using SciPy the way it
is.

Ivo

2009/5/21 Christopher Barker <Chris.Barker at noaa.gov>

> Ivo Maljevic wrote:
> > why bother to make something that looks like matlab,
>
> Who ever said numpy "looks like matlab", any more than it looks like
> any number of other programming environments...
>
> > Matplotlib does a pretty good job at replicating
> > matlab plot functions, at least at the level I need it to.
>
> Because it was designed exactly to do that -- but I think MPL's Matlab
> replication has been a hindrance, rather than a help, to a good API.
> However, it has been a help to its adoption.
>
> You may have noticed that over the years MPL is moving away from
> matlab, toward a more pythonic API.
>
> Personally, I like python so much more than Matlab exactly for these
> differences (and so many more). I suppose it's tough if you switch back
> and forth, but I haven't touched Matlab in years.
>
> It is rand() that is inconsistent, and that is an accident of history.
>
> > what ones([3,3]) does, the same way random.rand(3,3) does,
>
> well, rand() is a convenience function, and doesn't take a bunch of
> other parameters. In fact, it's listed under "Compatibility functions",
> and is really a wrapper for numpy.random.uniform, which takes a shape
> argument.
>
> > the reason why I included that error message in my previous message
> > is because I think it is completely non-helpful.
>
> That's another issue -- non-helpful error messages do show up a lot --
> in that case, if the user had typed:
>
> np.zeros(3, dtype=3)
>
> the error message would make sense. If you can suggest a better
> message, patches are always welcome.
>
> -Chris
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959 voice
> 7600 Sand Point Way NE   (206) 526-6329 fax
> Seattle, WA 98115        (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
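To make Chris's point about the convenience function versus the
consistent shape-tuple interface concrete, a quick throwaway sketch
(the arrays are invented examples):

    import numpy as np

    # the consistent NumPy interface: the shape is a single tuple argument
    a = np.ones((3, 3))
    b = np.random.uniform(size=(3, 3))  # what rand() wraps, per Chris

    # the Matlab-style convenience function: dimensions as separate args
    c = np.random.rand(3, 3)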
From josef.pktd at gmail.com  Thu May 21 22:27:20 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 21 May 2009 22:27:20 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <36002.66689.qm@web33005.mail.mud.yahoo.com>
References: <36002.66689.qm@web33005.mail.mud.yahoo.com>
Message-ID: <1cd32cbb0905211927l2ec6e3fbs1a5922b21bc966bd@mail.gmail.com>

On Thu, May 21, 2009 at 9:47 PM, David Baddeley wrote:
>
> Hi all,
>
> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distribution using something like the Kolmogorov-Smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way?
>

I have an example script that tries to fit a dataset to all
distributions in scipy.stats

http://code.google.com/p/joepython/source/browse/trunk/joepython/scipystats/enhance/try_VaR.py

I use ksstat as distance metric. If you have data with full support on
the real line and look only at those distributions, then the current
fit method works pretty well. Problems exist for distributions with a
finite support boundary point. And stats.distributions only has
univariate distributions; there is no support for multivariate
distributions. I have also written several extension distributions
(also univariate only), which are however not yet in scipy.

What exactly do you mean by "sum of multiple Gaussians"? If I take it
literally as a sum of several normally distributed random variables,
then the distribution would be just normal again. If you provide some
more information on the structure of your data, I would be better able
to see if scipy.stats can handle them.

Josef
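Condensed, the approach josef describes (fit each candidate by maximum
likelihood with the generic fit method, then rank candidates by the
Kolmogorov-Smirnov statistic) might look like the sketch below. This is
not the actual try_VaR.py script; the candidate list and the stand-in
sample are invented:

    import numpy as np
    from scipy import stats

    data = np.random.standard_t(5, size=500)     # stand-in sample

    results = []
    for dist in [stats.norm, stats.t, stats.cauchy]:
        params = dist.fit(data)                  # generic MLE fit
        ksstat, _pval = stats.kstest(data, dist.name, args=params)
        results.append((ksstat, dist.name, params))
    results.sort()                               # smallest KS distance first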
> > In the case of a sum of mutiple Gaussians, the more commonly used name > for this model is mixture models, and there is a vast range of possible > techniques for fitting a dataset to this model. There is a package in > scikits.learn to use the so-called Expectation Maximization algorithm to > estimate the maximum likelihood of such models > > http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/ > > You can have an overview on the wiki page: > > http://en.wikipedia.org/wiki/Mixture_model > Sum of random variables are convolutions, and are very different from mixtures of distributions. I just got confused in a discussion today when the other person talked about convolutions and I thought about mixtures and it didn't make a lot of sense. so, which is it? Josef From david at ar.media.kyoto-u.ac.jp Thu May 21 22:23:00 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 22 May 2009 11:23:00 +0900 Subject: [SciPy-user] Fitting an arbitrary distribution In-Reply-To: <1cd32cbb0905211933x53ea5b88na2f64934c5f121c@mail.gmail.com> References: <36002.66689.qm@web33005.mail.mud.yahoo.com> <4A1606AE.1030008@ar.media.kyoto-u.ac.jp> <1cd32cbb0905211933x53ea5b88na2f64934c5f121c@mail.gmail.com> Message-ID: <4A160C84.9050500@ar.media.kyoto-u.ac.jp> josef.pktd at gmail.com wrote: > On Thu, May 21, 2009 at 9:58 PM, David Cournapeau > wrote: > >> David Baddeley wrote: >> >>> Hi all, >>> >>> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distibution using something like the kolmogorov-smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way? >>> >> That's a complex topic in general, there is no best answer, it depends >> on your case, and what you intend to do with the estimated distribution. >> >> In the case of a sum of mutiple Gaussians, the more commonly used name >> for this model is mixture models, and there is a vast range of possible >> techniques for fitting a dataset to this model. There is a package in >> scikits.learn to use the so-called Expectation Maximization algorithm to >> estimate the maximum likelihood of such models >> >> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/ >> >> You can have an overview on the wiki page: >> >> http://en.wikipedia.org/wiki/Mixture_model >> >> > > Sum of random variables are convolutions, and are very different from > mixtures of distributions. I just got confused in a discussion today > when the other person talked about convolutions and I thought about > mixtures and it didn't make a lot of sense. > It depends on what is meant by sum of Gaussians: sum of the random variables or sum of the distribution. In the case of the sum of random variables, then it is a convolution as you mentioned (assuming independence of the random variables). But I think some people think mostly in terms of histogram/distributions, specially if they are not statisticians. I don't understand the term "sum of gaussians" as a technical term. 
From josef.pktd at gmail.com  Thu May 21 22:41:37 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 21 May 2009 22:41:37 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <1cd32cbb0905211933x53ea5b88na2f64934c5f121c@mail.gmail.com>
References: <36002.66689.qm@web33005.mail.mud.yahoo.com>
	<4A1606AE.1030008@ar.media.kyoto-u.ac.jp>
	<1cd32cbb0905211933x53ea5b88na2f64934c5f121c@mail.gmail.com>
Message-ID: <1cd32cbb0905211941l5b7f6611g84aaedcd57150b9e@mail.gmail.com>

On Thu, May 21, 2009 at 10:33 PM, <josef.pktd at gmail.com> wrote:
> On Thu, May 21, 2009 at 9:58 PM, David Cournapeau wrote:
>> David Baddeley wrote:
>>> Hi all,
>>>
>>> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distribution using something like the Kolmogorov-Smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way?
>>
>> That's a complex topic in general; there is no best answer, it depends on your case, and what you intend to do with the estimated distribution.
>>
>> In the case of a sum of multiple Gaussians, the more commonly used name for this model is mixture models, and there is a vast range of possible techniques for fitting a dataset to this model. There is a package in scikits.learn to use the so-called Expectation Maximization algorithm to estimate the maximum likelihood of such models
>>
>> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/
>>
>> You can have an overview on the wiki page:
>>
>> http://en.wikipedia.org/wiki/Mixture_model
>>
>
> Sums of random variables are convolutions, and are very different from mixtures of distributions. I just got confused in a discussion today when the other person talked about convolutions and I thought about mixtures, and it didn't make a lot of sense.
>
> So, which is it?
>

Actually, "Gaussians" is ambiguous in this context: does it mean a
random variable, or does it refer to the density/distribution function?
A sum of random variables is very different from a (weighted) sum of
distribution functions, both of which are possible interpretations of
"sum of Gaussians".

Josef

From josef.pktd at gmail.com  Thu May 21 22:44:30 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 21 May 2009 22:44:30 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <4A160C84.9050500@ar.media.kyoto-u.ac.jp>
References: <36002.66689.qm@web33005.mail.mud.yahoo.com>
	<4A1606AE.1030008@ar.media.kyoto-u.ac.jp>
	<1cd32cbb0905211933x53ea5b88na2f64934c5f121c@mail.gmail.com>
	<4A160C84.9050500@ar.media.kyoto-u.ac.jp>
Message-ID: <1cd32cbb0905211944x5e65ad10r75a5b43c676a290b@mail.gmail.com>

On Thu, May 21, 2009 at 10:23 PM, David Cournapeau wrote:
> josef.pktd at gmail.com wrote:
>> On Thu, May 21, 2009 at 9:58 PM, David Cournapeau wrote:
>>
>>> David Baddeley wrote:
>>>
>>>> Hi all,
>>>>
>>>> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distribution using something like the Kolmogorov-Smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way?
>>>>
>>> That's a complex topic in general; there is no best answer, it depends on your case, and what you intend to do with the estimated distribution.
>>>
>>> In the case of a sum of multiple Gaussians, the more commonly used name for this model is mixture models, and there is a vast range of possible techniques for fitting a dataset to this model. There is a package in scikits.learn to use the so-called Expectation Maximization algorithm to estimate the maximum likelihood of such models
>>>
>>> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/
>>>
>>> You can have an overview on the wiki page:
>>>
>>> http://en.wikipedia.org/wiki/Mixture_model
>>>
>>
>> Sums of random variables are convolutions, and are very different from mixtures of distributions. I just got confused in a discussion today when the other person talked about convolutions and I thought about mixtures, and it didn't make a lot of sense.
>>
>
> It depends on what is meant by sum of Gaussians: sum of the random variables or sum of the distributions. In the case of the sum of random variables, it is a convolution as you mentioned (assuming independence of the random variables). But I think some people think mostly in terms of histograms/distributions, especially if they are not statisticians. I don't understand the term "sum of gaussians" as a technical term.
>

Yes, I agree, you were ahead of me on realizing this.

Josef

From dav at alum.mit.edu  Thu May 21 22:55:54 2009
From: dav at alum.mit.edu (Dav Clark)
Date: Thu, 21 May 2009 19:55:54 -0700
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <4A1606AE.1030008@ar.media.kyoto-u.ac.jp>
References: <36002.66689.qm@web33005.mail.mud.yahoo.com>
	<4A1606AE.1030008@ar.media.kyoto-u.ac.jp>
Message-ID: <6330E6D3-DC4D-496E-A928-7CAC413E1D45@alum.mit.edu>

On May 21, 2009, at 6:58 PM, David Cournapeau wrote:

> David Baddeley wrote:
>> Hi all,
>>
>> I want to fit an arbitrary distribution (in this case the sum of multiple Gaussians) to some measured data and was wondering if anyone could give me any pointers as to the best way of doing this. I'd like to avoid fitting to a histogram if possible. How do the .fit() methods of the various distributions under scipy.stats do it? My first thought would be to compare the cumulative distribution of my data with that of the model distribution using something like the Kolmogorov-Smirnov metric (maximum absolute distance between the curves) and to minimize this using optimize.fmin. Is this the right way to do it? Or is there an easier way?
>
> That's a complex topic in general; there is no best answer, it depends on your case, and what you intend to do with the estimated distribution.
>
> In the case of a sum of multiple Gaussians, the more commonly used name for this model is mixture models, and there is a vast range of possible techniques for fitting a dataset to this model.
> There is a package in scikits.learn to use the so-called Expectation
> Maximization algorithm to estimate the maximum likelihood of such
> models
>
> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/em/

There's actually a broken link on that page if you look for the em page
for the scikits project, which is now here:

http://scikits.appspot.com/

Depending on what exactly you want to do, you may also want to check
out PyMC for Metropolis-Hastings.

http://code.google.com/p/pymc/

My guess is that you're looking for the em package though.

Cheers,
Dav

From david_baddeley at yahoo.com.au  Thu May 21 23:47:21 2009
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Thu, 21 May 2009 20:47:21 -0700 (PDT)
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: 
References: 
Message-ID: <790977.22967.qm@web33006.mail.mud.yahoo.com>

Thanks for the prompt replies!

I guess what I was meaning was that the PDF / histogram was the sum of
multiple Gaussians/normal distributions. Sorry about the ambiguity.
I've had a quick look at the Em package and mixture models, and while
my problem is similar they might be a little more general.

I guess I should describe the problem in a bit more detail - I'm
measuring the length of objects which can be built up from multiple
unit cells. The measured size distribution is thus multimodal, and I
want to extract both the unit size and the fraction of objects having
each number of unit cells. This makes the problem much more constrained
than what is dealt with in the Em package.

So far I've tried overriding rv_continuous to create a distribution
which roughly matches - but haven't been able to fit this.

cheers,
David
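Dav's PyMC pointer can be made concrete for this problem. What follows
is an untested sketch against the PyMC 2.x API of the kind of model
that could apply here; the priors, the noise level, the variable names
and the synthetic data are all invented for illustration and are not
taken from any poster's code:

    import numpy as np
    import pymc

    # invented synthetic data: objects of 1-4 unit cells of size ~1.0,
    # measured with a small amount of Gaussian noise
    lengths = (1.0 * np.random.randint(1, 5, size=50)
               + np.random.normal(0.0, 0.05, size=50))

    unit = pymc.Uniform('unit', lower=0.1, upper=5.0)
    # number of cells beyond the first, so every object has >= 1 cell
    extra = pymc.Poisson('extra', mu=1.0, size=len(lengths))

    @pymc.deterministic
    def expected(unit=unit, extra=extra):
        # expected length = unit size times the number of cells
        return unit * (extra + 1)

    obs = pymc.Normal('obs', mu=expected, tau=1.0/0.05**2,
                      value=lengths, observed=True)

    model = pymc.MCMC([unit, extra, expected, obs])
    model.sample(iter=20000, burn=10000)

The posterior samples of unit would then estimate the unit size, and
counting the sampled values of extra per observation would give the
fraction of objects with each number of cells.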
From josef.pktd at gmail.com  Fri May 22 00:24:59 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 22 May 2009 00:24:59 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <790977.22967.qm@web33006.mail.mud.yahoo.com>
References: <790977.22967.qm@web33006.mail.mud.yahoo.com>
Message-ID: <1cd32cbb0905212124w42c1cff6kb9ef2a5d82f0176b@mail.gmail.com>

On Thu, May 21, 2009 at 11:47 PM, David Baddeley wrote:
> Thanks for the prompt replies!
> > I guess what I was meaning was that the PDF / histogram was the sum of multiple Gaussians/normal distributions. Sorry about the ambiguity. I've had a quick look at the Em package and mixture models, and while my problem is similar they might be a little more general.
> >
> > I guess I should describe the problem in a bit more detail - I'm measuring the length of objects which can be built up from multiple unit cells. The measured size distribution is thus multimodal, and I want to extract both the unit size and the fraction of objects having each number of unit cells. This makes the problem much more constrained than what is dealt with in the Em package.
> >
> > So far I've tried overriding rv_continuous to create a distribution which roughly matches - but haven't been able to fit this.

First, please don't leave the entire digest in your reply.

Just for clarification: do all unit cells have the same size
distribution? Because in that case you have a lot more structure in
your distribution than is generally assumed in mixture models. Also the
number of parameters to estimate would be much smaller. So maximum
likelihood might work relatively well, if you give it good starting
values with information you get from the histogram.

If the size distribution of a unit cell is additionally normally
distributed, then it would be possible to write the correct likelihood
function and use fit to estimate the distribution parameters.

f(x) = sum_n f(x|n,theta) p(n)

where f(x|n,theta) would be the normal distribution of n iid random
variables (the cell sizes) and theta would be the common mean and
variance of the normal distribution. p(n) could be non-parametric, a
vector of p's, or parametric, e.g. Poisson. The distribution parameters
would be just the mean, the variance and the p's.

This should be doable by subclassing rv_continuous; I'll try to look
for an example of subclassing. However, this still wouldn't give you
test statistics, AIC information criteria and so on.

If you don't want to impose so much structure, you might be better off
with the general packages for mixtures, e.g. David's EM, or I think
pymc should also be able to handle this, although I never tried.

Josef
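A literal rendering of that density may make the structure clearer. A
minimal sketch (the function name and the example numbers are invented;
the n-cell component is taken as the distribution of a sum of n iid
cell sizes, i.e. N(n*mu, n*sigma**2)):

    import numpy as np
    from scipy import stats

    def mixture_pdf(x, mu, sigma, p):
        # f(x) = sum_n p[n-1] * f(x|n), where f(x|n) is the density of
        # the sum of n iid N(mu, sigma**2) cell sizes: N(n*mu, n*sigma**2)
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for i, pn in enumerate(p):
            n = i + 1
            total += pn * stats.norm.pdf(x, loc=n * mu,
                                         scale=np.sqrt(n) * sigma)
        return total

    # e.g. unit size 1.0, cell-size sd 0.05, objects of 1-3 cells
    grid = np.linspace(0.0, 4.0, 401)
    dens = mixture_pdf(grid, 1.0, 0.05, [0.5, 0.3, 0.2])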
From dwf at cs.toronto.edu  Fri May 22 00:46:43 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Fri, 22 May 2009 00:46:43 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: <790977.22967.qm@web33006.mail.mud.yahoo.com>
References: <790977.22967.qm@web33006.mail.mud.yahoo.com>
Message-ID: 

On 21-May-09, at 11:47 PM, David Baddeley wrote:

> Thanks for the prompt replies!
>
> I guess what I was meaning was that the PDF / histogram was the sum of multiple Gaussians/normal distributions. Sorry about the ambiguity. I've had a quick look at the Em package and mixture models, and while my problem is similar they might be a little more general.
>
> I guess I should describe the problem in a bit more detail - I'm measuring the length of objects which can be built up from multiple unit cells. The measured size distribution is thus multimodal, and I want to extract both the unit size and the fraction of objects having each number of unit cells. This makes the problem much more constrained than what is dealt with in the Em package.

So there is exactly one kind of 'unit cell' and then different lengths
of objects? Are your observations expected to be particularly noisy?

What you're staring down isn't quite a standard mixture model, your
'hidden variable' is just this unit size.

Depending on the scale of your problem PyMC would work very well here.
You'd have one Stochastic for the unit size, another for the integer
multiple associated with each quantity, a Deterministic that multiplies
the two, and either make the Deterministic observed or add another
Stochastic with the Deterministic as its mean if you believe your
observations are noisy with a certain noise distribution.

Note that since these things you are measuring are supposedly discrete
multiples of the unit size, a Gaussian distribution isn't appropriate
for the multiples. Something like a Poisson would make more sense.

Then you'd just fit the model and look at the posterior distributions
over each quantity of interest, taking the maximum a posteriori
(posterior mode) estimate where appropriate. To determine the fraction
of the population that have a given number of unit cells, you basically
just count (you'd have an estimate for each observation of how many
unit cells it has).

You could also do this by EM, but pyem would not be suitable, as it's
built specifically for the case of vanilla Gaussian mixtures. PyMC
would be a ready-made solution which would give you the additional
flexibility of inferring a distribution over all estimated parameters
rather than just a point-estimate.

David

From ferrell at diablotech.com  Fri May 22 09:02:35 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Fri, 22 May 2009 07:02:35 -0600
Subject: [SciPy-user] TimeSeries concatenate
Message-ID: <68846329-16FB-457F-822C-5E49ECAED26A@diablotech.com>

I'm having trouble with concatenate in scikits.timeseries. I can't
find (timeseries) concatenate documented, but I did find an example
from a few years ago (http://projects.scipy.org/scipy/changeset/3570),
so maybe this isn't supported.

Here's what I get when I try to use the timeseries concatenate.

In [82]: import scikits.timeseries as ts

In [83]: ts1 = ts.time_series(data=[1,2,3], start_date=ts.now('d'))

In [84]: ts2 = ts.time_series(data=[4,5,6], start_date=ts.now('d')+1)

In [85]: ts.concatenate(ts1, ts2)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

/Users/Shared/Develop/Financial/LakMerc/Mercury/tests/<ipython console> in <module>()

/Library/Frameworks/Python.framework/Versions/2.5.2001/lib/python2.5/site-packages/scikits/timeseries/tseries.py in concatenate(series, axis, remove_duplicates, fill_missing)
   1950     """
   1951     # Get the common frequency, raise an error if incompatibility
-> 1952     common_f = _compare_frequencies(*series)
   1953     # Concatenate the order of series
   1954     sidx = np.concatenate([np.repeat(i,len(s))

/Library/Frameworks/Python.framework/Versions/2.5.2001/lib/python2.5/site-packages/scikits/timeseries/tseries.py in _compare_frequencies(*series)
    284     frequencies.
    285     """
--> 286     unique_freqs = np.unique([x.freqstr for x in series])
    287     try:
    288         common_freq = unique_freqs.item()

AttributeError: 'numpy.int32' object has no attribute 'freqstr'

From ferrell at diablotech.com  Fri May 22 09:09:03 2009
From: ferrell at diablotech.com (Robert Ferrell)
Date: Fri, 22 May 2009 07:09:03 -0600
Subject: [SciPy-user] TimeSeries concatenate
In-Reply-To: <68846329-16FB-457F-822C-5E49ECAED26A@diablotech.com>
References: <68846329-16FB-457F-822C-5E49ECAED26A@diablotech.com>
Message-ID: 

This was user error, of course. I needed an extra set of parens:

ts.concatenate( (ts1, ts2) )

Question: What happens to duplicate dates? It seems that the data in
the first series is used. Is that the rule?

thanks,
-robert
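In condensed form, the working call from this exchange is just the
tuple-wrapped version below. Note also that the traceback above shows
concatenate's full signature, concatenate(series, axis,
remove_duplicates, fill_missing), so the remove_duplicates and
fill_missing arguments presumably control the duplicate-date behaviour
Robert asks about:

    import scikits.timeseries as ts

    ts1 = ts.time_series(data=[1, 2, 3], start_date=ts.now('d'))
    ts2 = ts.time_series(data=[4, 5, 6], start_date=ts.now('d') + 1)

    # concatenate expects a sequence of series, hence the inner parens
    merged = ts.concatenate((ts1, ts2))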
On May 22, 2009, at 7:02 AM, Robert Ferrell wrote:

> I'm having trouble with concatenate in scikits.timeseries. I can't
> find (timeseries) concatenate documented, but I did find an example
> from a few years ago (http://projects.scipy.org/scipy/changeset/3570),
> so maybe this isn't supported.
>
> Here's what I get when I try to use the timeseries concatenate.
>
> In [82]: import scikits.timeseries as ts
>
> In [83]: ts1 = ts.time_series(data=[1,2,3], start_date=ts.now('d'))
>
> In [84]: ts2 = ts.time_series(data=[4,5,6], start_date=ts.now('d')+1)
>
> In [85]: ts.concatenate(ts1, ts2)
> ---------------------------------------------------------------------------
> AttributeError                            Traceback (most recent call last)
>
> /Users/Shared/Develop/Financial/LakMerc/Mercury/tests/<ipython console> in <module>()
>
> /Library/Frameworks/Python.framework/Versions/2.5.2001/lib/python2.5/site-packages/scikits/timeseries/tseries.py in concatenate(series, axis, remove_duplicates, fill_missing)
>    1950     """
>    1951     # Get the common frequency, raise an error if incompatibility
> -> 1952     common_f = _compare_frequencies(*series)
>    1953     # Concatenate the order of series
>    1954     sidx = np.concatenate([np.repeat(i,len(s))
>
> /Library/Frameworks/Python.framework/Versions/2.5.2001/lib/python2.5/site-packages/scikits/timeseries/tseries.py in _compare_frequencies(*series)
>     284     frequencies.
>     285     """
> --> 286     unique_freqs = np.unique([x.freqstr for x in series])
>     287     try:
>     288         common_freq = unique_freqs.item()
>
> AttributeError: 'numpy.int32' object has no attribute 'freqstr'
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com  Fri May 22 10:40:00 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 22 May 2009 10:40:00 -0400
Subject: [SciPy-user] Fitting an arbitrary distribution
In-Reply-To: 
References: <790977.22967.qm@web33006.mail.mud.yahoo.com>
Message-ID: <1cd32cbb0905220740jd623f2dl5d8091cded91489c@mail.gmail.com>

On Fri, May 22, 2009 at 12:46 AM, David Warde-Farley wrote:
> On 21-May-09, at 11:47 PM, David Baddeley wrote:
>
>> Thanks for the prompt replies!
>>
>> I guess what I was meaning was that the PDF / histogram was the sum of multiple Gaussians/normal distributions. Sorry about the ambiguity. I've had a quick look at the Em package and mixture models, and while my problem is similar they might be a little more general.
>>
>> I guess I should describe the problem in a bit more detail - I'm measuring the length of objects which can be built up from multiple unit cells. The measured size distribution is thus multimodal, and I want to extract both the unit size and the fraction of objects having each number of unit cells. This makes the problem much more constrained than what is dealt with in the Em package.
>
> So there is exactly one kind of 'unit cell' and then different lengths
> of objects? Are your observations expected to be particularly noisy?
>
> What you're staring down isn't quite a standard mixture model, your
> 'hidden variable' is just this unit size.
>
> Depending on the scale of your problem PyMC would work very well here.
> You'd have one Stochastic for the unit size, another for the integer
> multiple associated with each quantity, a Deterministic that multiplies
> the two, and either make the Deterministic observed or add another
> Stochastic with the Deterministic as its mean if you believe your
> observations are noisy with a certain noise distribution.
>
> Note that since these things you are measuring are supposedly discrete
> multiples of the unit size, a Gaussian distribution isn't appropriate
> for the multiples. Something like a Poisson would make more sense.
>
> Then you'd just fit the model and look at the posterior distributions
> over each quantity of interest, taking the maximum a posteriori
> (posterior mode) estimate where appropriate. To determine the fraction
> of the population that have a given number of unit cells, you basically
> just count (you'd have an estimate for each observation of how many
> unit cells it has).
>
> You could also do this by EM, but pyem would not be suitable, as it's
> built specifically for the case of vanilla Gaussian mixtures. PyMC
> would be a ready-made solution which would give you the additional
> flexibility of inferring a distribution over all estimated parameters
> rather than just a point-estimate.
>

I also think that pymc offers the best and most well-tested way of
doing this. Just to show how it is possible to do it with stats
distributions, I attach a script where I quickly hacked together
different pieces to try out the maximum likelihood estimation for this
case. It's not cleaned up and has pieces left over from copy and paste,
but it's a proof of concept for how univariate mixture distributions
can be constructed as subclasses of rv_continuous. But to be really
useful, several parts need to be improved.

Josef

-------------- next part --------------
# -*- coding: utf-8 -*-
"""
univariate_mixture.py

Notes
-----
* estimates a mixture of normal distributions given by a random sum of
  iid normal distributions
* only works for good initial conditions and if the variance of the
  individual normal distribution is not too large
* maximum likelihood estimation using the generic fit method is not
  very precise and makes interpretation of parameters more difficult
* maximum likelihood estimation with fixed location and scale done in
  a quick hack, gives good results for the "nice" estimation problem
  (good initial conditions and small variance)
* to use generic methods, only pdf would need to be specified
* restriction: hard coded number of mixtures is four
* to be useful would need AIC, BIC, and covariance matrix of maximum
  likelihood estimates
* does not impose non-negativity of the mixture probabilities, could
  use logit transformation
* this pattern might work well for arbitrary distributions, when the
  estimation problem is nice, e.g. mixture of only 2 distributions or
  well separated distributions
* need proper implementation of estimation with frozen parameters like
  location and scale
  #see: for application of frozen parameter in fit
  #http://mail.scipy.org/pipermail/scipy-user/2009-February/019968.html

License: same as scipy
"""

import numpy as np
from scipy import stats, special, optimize
from scipy.stats import distributions


class normmix_gen(distributions.rv_continuous):
    def _rvs(self, mu, sig, p1, p2, p3):
        return np.hstack((
            mu + sig*stats.norm.rvs(size=p1*self._size),
            2.0*mu + 2.0*sig*stats.norm.rvs(size=p2*self._size),
            3.0*mu + 3.0*sig*stats.norm.rvs(size=p3*self._size),
            4.0*mu + 4.0*sig*stats.norm.rvs(size=round((1-p1-p2-p3)*self._size))))

    def _pdf(self, x, mu, sig, p1, p2, p3):
        return p1*stats.norm.pdf(x, loc=mu, scale=sig) + \
               p2*stats.norm.pdf(x, loc=2.0*mu, scale=2.0*sig) + \
               p3*stats.norm.pdf(x, loc=3.0*mu, scale=3.0*sig) + \
               (1-p1-p2-p3)*stats.norm.pdf(x, loc=4.0*mu, scale=4.0*sig)

    def _nnlf_(self, x, *args):
        # inherited version for comparison
        #print 'args in nnlf_', args
        return -np.sum(np.log(self._pdf(x, *args)), axis=0)

    def nnlf_fix(self, theta, x):
        # quick hack to remove loc and scale, removed also bound checking
        #
        # - sum (log pdf(x, theta), axis=0)
        #   where theta are the parameters (including loc and scale)
        #
#        try:
#            loc = theta[-2]
#            scale = theta[-1]
#            args = tuple(theta[:-2])
#        except IndexError:
#            raise ValueError, "Not enough input arguments."
#        if not self._argcheck(*args) or scale <= 0:
#            return inf
#        x = arr((x-loc) / scale)
#        cond0 = (x <= self.a) | (x >= self.b)
#        if (any(cond0)):
#            return inf
#        else:
#            N = len(x)
        args = tuple(theta)
        #print args
        return self._nnlf_(x, *args)  # + N*log(scale)

    def fit_fix(self, data, *args, **kwds):
        '''stolen from frozen distribution estimation and partial
        removal of loc and scale'''
        loc0, scale0 = map(kwds.get, ['loc', 'scale'], [0.0, 1.0])
        Narg = len(args)
        if Narg == 0 and hasattr(self, '_fitstart'):
            x0 = self._fitstart(data)
        elif Narg > self.numargs:
            raise ValueError, "Too many input arguments."
        else:
            #args += (1.0,)*(self.numargs-Narg)
            # location and scale are at the end
            x0 = args  # + (loc0, scale0)
        if 'frozen' in kwds:
            frmask = np.array(kwds['frozen'])
            if len(frmask) != self.numargs+2:
                raise ValueError, "Incorrect number of frozen arguments."
            else:
                # keep starting values for not frozen parameters
                x0 = np.array(x0)[np.isnan(frmask)]
        else:
            frmask = None
        #print x0
        #print frmask
        return optimize.fmin(self.nnlf_fix, x0, args=(np.ravel(data), ), disp=0)


normmix = normmix_gen(a=0.0, name='normmix', longname='A normmix',
                      shapes=('mu, sig, p1, p2, p3'),
                      extradoc="""normmix""")

true = (1., 0.05, 0.4, 0.2, 0.2)
rvs = normmix.rvs(size=1000, *true)
#rvs = normmix.rvs(1.,0.01,0.4,0.2,0.2, size=1000)
#rvs = normmix.rvs(1.,0.05,0.5,0.49,0.01, size=1000)

est = normmix.fit(rvs, 1., 0.005, 0.4, 0.2, 0.2, loc=0, scale=1)

startval = np.array((1., 0.005, 0.4, 0.2, 0.2))*1.1
#est2 = normmix.fit_fix(rvs,*startval)
est2 = normmix.fit_fix(1.0*rvs, 1.05, 0.005, 0.3, 0.21, 0.21)

print 'estimate with generic fit'
print est
print 'estimate with fixed loc scale'
print est2

import matplotlib.pyplot as plt

#x = rvs
mu, sig, p1, p2, p3 = true
plt.figure()
c, b, d = plt.hist(rvs, bins=50, normed=True)
plt.figure()
plt.plot(b, p1*stats.norm.pdf(b, loc=mu, scale=sig),
         b, p2*stats.norm.pdf(b, loc=2.0*mu, scale=2.0*sig),
         b, p3*stats.norm.pdf(b, loc=3.0*mu, scale=3.0*sig))
plt.figure()
plt.plot(b, normmix.pdf(b, 1, 0.05, 0.4, 0.2, 0.2))
#plt.show()
plt.figure()
plt.plot(b, normmix.pdf(b, *est))
plt.title('estimated pdf, generic fit')
#plt.show()
plt.figure()
plt.plot(b, normmix.pdf(b, *est2))
plt.title('estimated pdf, loc,scale fixed')
#plt.show()

print np.array(est)[:-2] - true
print np.array(est2) - true
print 'estimated mean unit size', est2[0]
print 'estimated standard deviation of unit size', est2[1]

From keflavich at gmail.com  Fri May 22 11:40:22 2009
From: keflavich at gmail.com (Adam)
Date: Fri, 22 May 2009 08:40:22 -0700 (PDT)
Subject: [SciPy-user] 64 bit on Mac?
In-Reply-To: 
References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com>
	<6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu>
Message-ID: <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com>

Awesome, thanks! That fixed my numpy build errors. I then successfully
installed scipy and matplotlib. However, I still failed at getting a
nice pylab session running: ipython won't run because of 'entry point'
errors. Argh. Any tips there? I swear someday I'll get a 64 bit pylab
session running...

Traceback (most recent call last):
  File "/usr/local/bin/ipython", line 9, in <module>
    load_entry_point('ipython==0.9.1', 'console_scripts', 'ipython')()
  File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line 277, in load_entry_point
  File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line 2179, in load_entry_point
ImportError: Entry point ('console_scripts', 'ipython') not found

Also, if anyone's interested, I'm also pursuing a solution via sage,
but it has its own set of unique and interesting problems....
http://groups.google.com/group/sage-support/browse_thread/thread/7ef78c32ff399de5/02c84985c84d25e3?lnk=gst&q=sage+with+tkinter#02c84985c84d25e3

Again, thanks,
Adam

On May 20, 8:17 pm, Roger Herikstad wrote:
> Hi,
>  Sounds like the exact same problem I was having. There's a ticket for
> it here http://projects.scipy.org/numpy/ticket/1111, with a patch that
> fixed the problem for me, at least.
> Good luck!
>
> ~ Roger
>
> On Wed, May 20, 2009 at 10:36 PM, Gins wrote:
> > Thanks.  I successfully got python 2.6.2 compiled with 64 bit support,
> > but when I try to compile numpy I run into errors that are a little
> > beyond my experience:
> >
> > gcc: build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.c
> > In file included from numpy/core/include/numpy/ndarrayobject.h:26,
from numpy/core/include/numpy/noprefix.h:7, > > ? ? ? ? ? ? ? ? from numpy/core/src/_sortmodule.c.src:29: > > numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown > > CPU: can not set endianness > > lipo: can't figure out the architecture type of: /var/folders/ni/ni > > +DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out > > In file included from numpy/core/include/numpy/ndarrayobject.h:26, > > ? ? ? ? ? ? ? ? from numpy/core/include/numpy/noprefix.h:7, > > ? ? ? ? ? ? ? ? from numpy/core/src/_sortmodule.c.src:29: > > numpy/core/include/numpy/npy_endian.h:33:10: error: #error Unknown > > CPU: can not set endianness > > lipo: can't figure out the architecture type of: /var/folders/ni/ni > > +DtdqFGMeSMH13AvkNkU+++TI/-Tmp-//ccJos8Iw.out > > error: Command "gcc -arch i386 -arch ppc -arch ppc64 -arch x86_64 - > > isysroot / -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g - > > fwrapv -O3 -Wall -Wstrict-prototypes -Inumpy/core/include -Ibuild/ > > src.macosx-10.5-universal-2.6/numpy/core/include/numpy -Inumpy/core/ > > src -Inumpy/core/include -I/Library/Frameworks/Python.framework/ > > Versions/2.6/include/python2.6 -c build/src.macosx-10.5-universal-2.6/ > > numpy/core/src/_sortmodule.c -o build/temp.macosx-10.5-universal-2.6/ > > build/src.macosx-10.5-universal-2.6/numpy/core/src/_sortmodule.o" > > failed with exit status 1 > > > and I haven't had any luck with the numpy .dmg files for mac. > > > I'll check out sage next and report back. ?Thanks for the tips! > > Adam > > > On May 19, 5:48?pm, David Warde-Farley wrote: > >> Hi Adam, > > >> On 17-Apr-09, at 12:38 PM, Keflavich wrote: > > >> > can't get a 64-bit version of python compiled and google has been > >> > unhelpful in resolving the problem. ?Is there a workaround to get 64 > > >> I have had a lot of success with (using the 2.6.2 sources) > > >> mkdir -p build && cd build && ./configure --with-framework- > >> name=Python64 --with-universal-archs=all --enable-framework --enable- > >> universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 && make && sudo make > >> install > > >> That builds a 4-way universal binary. --with-universal-archs=64-bit > >> will get you just the 64 bit stuff (note that a few of the make > >> install steps will fail because of Carbon deprecation but nothing > >> important as far as I can see). > > >> David > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > > SciPy-user mailing list > > SciPy-u... at scipy.org > >http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Fri May 22 14:43:37 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 May 2009 13:43:37 -0500 Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> Message-ID: <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> On Fri, May 22, 2009 at 10:40, Adam wrote: > Awesome, thanks! ?That fixed my numpy build errors. ?I then > successfully installed scipy and matplotlib. 
?However, I still failed > at getting a nice pylab session running: ipython won't run because of > 'entry point' errors. ?Argh. ?Any tips there? ?I swear someday I'll > get a 64 bit pylab session running... > > Traceback (most recent call last): > ?File "/usr/local/bin/ipython", line 9, in > ? ?load_entry_point('ipython==0.9.1', 'console_scripts', 'ipython')() > ?File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line > 277, in load_entry_point > ?File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line > 2179, in load_entry_point > ImportError: Entry point ('console_scripts', 'ipython') not found Did you install IPython for this interpreter? Or is that ipython executable from a previous installation? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Fri May 22 15:57:36 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 22 May 2009 15:57:36 -0400 Subject: [SciPy-user] using UnivariateSpline Message-ID: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> I must be crazy, but how does one actually USE UnivariateSpline, etc. to do interpolation? How do I evaluate the spline at other data after it's fit? There seems to be no "evaluate" method or equivalent to splev. Thanks, David From robert.kern at gmail.com Fri May 22 15:57:14 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 May 2009 14:57:14 -0500 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> Message-ID: <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> On Fri, May 22, 2009 at 14:57, David Warde-Farley wrote: > I must be crazy, but how does one actually USE UnivariateSpline, etc. > to do interpolation? How do I evaluate the spline at other data after > it's fit? > > There seems to be no "evaluate" method or equivalent to splev. def __call__(self, x, nu=None): """ Evaluate spline (or its nu-th derivative) at positions x. Note: x can be unordered but the evaluation is more efficient if x is (partially) ordered. """ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Fri May 22 16:12:10 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 May 2009 16:12:10 -0400 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> Message-ID: <1cd32cbb0905221312t3d4a108n6ee5d7e5fae0c4e@mail.gmail.com> On Fri, May 22, 2009 at 3:57 PM, David Warde-Farley wrote: > I must be crazy, but how does one actually USE UnivariateSpline, etc. > to do interpolation? read the source, look at the tests, scipy\interpolate\tests\test_fitpack.py, search the mailing lists and hope for the best (and file a bug report) below are some of my attempt of understanding what's going on with the spline classes > How do I evaluate the spline at other data after > it's fit? > > There seems to be no "evaluate" method or equivalent to splev. 
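(Before the longer script below, a minimal sketch of the pattern Robert points out: the fitted spline object is simply called. The data and the smoothing value here are invented for illustration.)

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.randn(50)

spl = UnivariateSpline(x, y, s=1.0)   # s is the smoothing factor
xnew = np.linspace(0, 10, 200)
ynew = spl(xnew)        # evaluation happens through __call__
yder = spl(xnew, nu=1)  # nu-th derivative, per the docstring quoted above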
------------------- """ try_spline.py """ import numpy as np from scipy import interpolate import matplotlib.pyplot as plt npoints = 51 x = np.linspace(0, 20, npoints) y = np.sqrt(x) + np.sin(x) + 0.2* np.random.randn(npoints) tck = interpolate.splrep(x, y) x2 = np.linspace(0, 20, 200) y2 = interpolate.splev(x2, tck) plt.plot(x, y, 'o', x2, y2, '-.') #plt.show() #tck2 = interpolate.splrep(x, y, t=tck[1]) #x3 = np.linspace(0, 20, 300) #y3 = interpolate.splev(x3, tck2) #plt.plot(x, y, 'o', x3, y3, '-.') # us = interpolate.UnivariateSpline(x,y) t = us.get_knots() print t #tck2 = interpolate.splrep(x2, y2, t=t, full_output=True) tck2 = interpolate.splrep(x, y, s=1, full_output=True) y2 = interpolate.splev(x2, tck2[0]) plt.plot(x, y, 'o', x2, y2, '-.') tt=x[1:-1:2] lsus=interpolate.LSQUnivariateSpline(x,y,tt) yh = lsus(x2) plt.figure() plt.plot(x, y, 'o', x2, yh, '-.') lsus=interpolate.UnivariateSpline(x,y,s=2) yh = lsus(x2) plt.figure() plt.plot(x, y, 'o', x2, yh, '-.') #derivatives doesn't take array arguments correctly print lsus.derivatives(x) #using fitpacks spalde directly works deri = np.array(interpolate.spalde(x, lsus._eval_args)) print deri[:10,:] print np.all(lsus(x) == deri[:,0]) print np.max(np.abs(lsus(x) - deri[:,0])) deri2 = np.array(map(lsus.derivatives,x)) print np.all(deri2 == deri) print np.max(np.abs(deri2 - deri)) print lsus.integral(x[0],x[-1]) #plt.show() example = 0 if example == 3: #copied from #http://groups.google.ca/group/scipy-user/browse_thread/thread/ded43ebce135c520/eccf1dd343456137?hl=en&lnk=gst&q=splrep#eccf1dd343456137 import scipy as sp x=sp.linspace(0,10,11) y=sp.sin(x) x2=sp.linspace(0,10,201) tck=sp.interpolate.splrep(x,y,k=3) y2=sp.interpolate.splev(x2,tck) f=sp.interpolate.interp1d(x,y,kind=3) y3=f(x2) ''' :members: __call__, derivatives, get_coeffs, get_knots, get_residual, integral, roots, set_smoothing_factor' ''' From mattknox.ca at gmail.com Fri May 22 16:16:43 2009 From: mattknox.ca at gmail.com (Matt Knox) Date: Fri, 22 May 2009 20:16:43 +0000 (UTC) Subject: [SciPy-user] TimeSeries concatenate References: <68846329-16FB-457F-822C-5E49ECAED26A@diablotech.com> Message-ID: > Question: What happens to duplicate dates? It seems that the data in > the first series is used. Is that the rule? One thing I would recommend (which is not obvious to new python users many times) is to check the function doc strings using the built in "help" function (see below). So to answer your question, yes that is the rule IF the `remove_duplicates` parameter is set to "True" (which is the default). - Matt >>> import scikits.timeseries as ts >>> help(ts.concatenate) Help on function concatenate in module scikits.timeseries.tseries: concatenate(series, axis=0, remove_duplicates=True, fill_missing=False) Joins series together. The series are joined in chronological order. Duplicated dates are handled with the `remove_duplicates` parameter. If `remove_duplicate` is False, duplicated dates are saved. Otherwise, only the first occurence of the date is conserved. Parameters ---------- series : {sequence} Sequence of time series to join axis : {0, None, int}, optional Axis along which to join remove_duplicates : {False, True}, optional Whether to remove duplicated dates. fill_missing : {False, True}, optional Whether to fill the missing dates with missing values. 
Examples -------- >>> a = time_series([1,2,3], start_date=now('D')) >>> b = time_series([10,20,30], start_date=now('D')+1) >>> c = concatenate((a,b)) >>> c._series masked_array(data = [ 1 2 3 30], mask = False, fill_value=999999) From dwf at cs.toronto.edu Fri May 22 16:26:48 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 22 May 2009 16:26:48 -0400 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> Message-ID: On 22-May-09, at 3:57 PM, Robert Kern wrote: > On Fri, May 22, 2009 at 14:57, David Warde-Farley > wrote: >> I must be crazy, but how does one actually USE UnivariateSpline, etc. >> to do interpolation? How do I evaluate the spline at other data after >> it's fit? >> >> There seems to be no "evaluate" method or equivalent to splev. > > def __call__(self, x, nu=None): > """ Evaluate spline (or its nu-th derivative) at positions x. > Note: x can be unordered but the evaluation is more efficient > if x is (partially) ordered. I somehow completely missed this. I guess I was skipping over the __init__ method because I already understood it. :S Thanks Robert. David From keflavich at gmail.com Fri May 22 16:57:06 2009 From: keflavich at gmail.com (Adam) Date: Fri, 22 May 2009 13:57:06 -0700 (PDT) Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> Message-ID: <97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com> Thanks... I was missing something apallingly obvious. I had installed ipython correctly but had the wrong version in my path, and I was relying on the modification date to tell me whether I was using the right one... which was incorrect. Of course, that could never fix all of the problems. Now I have a font issues.... I will try to track it down: File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ python2.6/site-packages/matplotlib/font_manager.py", line 52, in from matplotlib import ft2font ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/ lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not found: _FT_Attach_File Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/ lib/python2.6/site-packages/matplotlib/ft2font.so Expected in: dynamic lookup Adam On May 22, 12:43?pm, Robert Kern wrote: > On Fri, May 22, 2009 at 10:40, Adam wrote: > > Awesome, thanks! ?That fixed my numpy build errors. ?I then > > successfully installed scipy and matplotlib. ?However, I still failed > > at getting a nice pylab session running: ipython won't run because of > > 'entry point' errors. ?Argh. ?Any tips there? ?I swear someday I'll > > get a 64 bit pylab session running... > > > Traceback (most recent call last): > > ?File "/usr/local/bin/ipython", line 9, in > > ? 
load_entry_point('ipython==0.9.1', 'console_scripts', 'ipython')() > > File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line > > 277, in load_entry_point > > File "build/bdist.macosx-10.5-universal/egg/pkg_resources.py", line > > 2179, in load_entry_point > > ImportError: Entry point ('console_scripts', 'ipython') not found > > Did you install IPython for this interpreter? Or is that ipython > executable from a previous installation? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Fri May 22 16:59:16 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 May 2009 15:59:16 -0500 Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> Message-ID: <3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com> On Fri, May 22, 2009 at 15:57, Adam wrote: > Thanks... I was missing something apallingly obvious. I had installed > ipython correctly but had the wrong version in my path, and I was > relying on the modification date to tell me whether I was using the > right one... which was incorrect. > > Of course, that could never fix all of the problems. Now I have a > font issues.... I will try to track it down: > > File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > python2.6/site-packages/matplotlib/font_manager.py", line 52, in > <module> > from matplotlib import ft2font > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/ > lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not > found: _FT_Attach_File > Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/ > lib/python2.6/site-packages/matplotlib/ft2font.so > Expected in: dynamic lookup Most likely, you do not have a 64-bit build of the FreeType library. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From keflavich at gmail.com Fri May 22 17:29:01 2009 From: keflavich at gmail.com (Adam) Date: Fri, 22 May 2009 14:29:01 -0700 (PDT) Subject: [SciPy-user] 64 bit on Mac? In-Reply-To: <3d375d730905221359s1ec1943chb80d6507678b405f@mail.gmail.com> References: <60cc3bb5-ab28-42e6-874c-ef49dd2bf015@d2g2000pra.googlegroups.com> <6595CCDD-785D-448E-AE21-1D184BEF6330@cs.toronto.edu> <3f919b92-d7a6-4a82-9cb7-755172bd4af9@v23g2000pro.googlegroups.com> <3d375d730905221143i55d2869ch27d4a3eee8f17085@mail.gmail.com> <97867327-59d4-4d5a-b53b-24827ff619d6@c7g2000prc.googlegroups.com> Message-ID: <622e6568-afa3-4dd1-8414-4f5640b07b45@p21g2000prn.googlegroups.com> Sounds right to me. I'm trying to figure out how to build a 64 bit version of the freetype library now. I'll report any more successes/ failures I run into. Thanks. 
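(An aside, not from the thread: before chasing library architectures it can help to confirm that the interpreter itself is running 64-bit. A quick check from within Python 2; the printed values are indicative only.)

import sys
import struct

print sys.maxint                 # 9223372036854775807 on a 64-bit Python 2.x
print struct.calcsize('P') * 8   # pointer width in bits: 64 when running 64-bit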
Adam On May 22, 2:59?pm, Robert Kern wrote: > On Fri, May 22, 2009 at 15:57, Adam wrote: > > Thanks... I was missing something apallingly obvious. ?I had installed > > ipython correctly but had the wrong version in my path, and I was > > relying on the modification date to tell me whether I was using the > > right one... which was incorrect. > > > Of course, that could never fix all of the problems. ?Now I have a > > font issues.... I will try to track it down: > > > ?File "/Library/Frameworks/Python.framework/Versions/2.6/lib/ > > python2.6/site-packages/matplotlib/font_manager.py", line 52, in > > > > ? ?from matplotlib import ft2font > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/ > > lib/python2.6/site-packages/matplotlib/ft2font.so, 2): Symbol not > > found: _FT_Attach_File > > ?Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/ > > lib/python2.6/site-packages/matplotlib/ft2font.so > > ?Expected in: dynamic lookup > > Most likely, you do not have a 64-bit build of the FreeType library. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From erik.tollerud at gmail.com Fri May 22 17:57:27 2009 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 22 May 2009 14:57:27 -0700 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> Message-ID: So who has the power to update these docs, anyway? It doesn't seem that complicated to make the necessary clarifications... On Wed, May 20, 2009 at 7:38 PM, wrote: > On Wed, May 20, 2009 at 9:54 PM, Erik Tollerud wrote: >> I use the splines in scipy.interpolate quite a bit, and I particularly >> like ?the *UnivariateSpline and *BivariateSpline ?wrapper classes. >> However, I cannot for the life of me work out what gives with the >> names and documentation... As far as I can tell, the univariate >> splines are as follows: >> >> UnivariateSpline : A spline where the number of knots is chosen using >> the "smoothing factor" s >> LSQUnivariateSpline: A spline where the knots are explicitly specified > > At least the docs need a lot of improvement, I tried out the splines > for the first time a short time ago, and I only realized this for > LSQUnivariateSpline after receiving exceptions when I wanted to update > the knots as described in the docs. Also, the dispatch behaviour of > UnivariateSpline is not described. > The docs for the original wrappers, splrep, splev, sproot, spalde, > splint, is more informative. > > I was looking at these spline classes as a replacement for the spline > implementation in stats.models, but for a newbie to splines the > documentation is not very helpful. > > But the splines produce nice pictures. > > Josef > > >> InterpolatedUnivariateSpline: A spline with s=0 or t=[] (e.g. passes >> through all the fitting points) >> >> The documentation just says the second two "just have less error >> checking"... aren't they for very different purposes? ?And while I >> recognize that name changes at this stage might be uncalled for, the >> names are somewhat misleading, too... 
shouldn't they be >> "SmoothUnivariateSpline","KnotUnivariateSpline", and >> "InterpolatedUnivariateSpline" or something like that? >> >> It also seems there are similar versions for the *BivariateSpline >> classes, although it's unclear to me exactly what the raw >> BivariateSpline class does as compared to the SmoothBivariateSpline >> (and the RectBivariateSpline, at least, makes sense) >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Erik Tollerud Graduate Student Center For Cosmology Department of Physics and Astronomy 2142 Frederick Reines Hall University of California, Irvine Office Phone: (949)824-2587 Cell: (651)307-9409 etolleru at uci.edu http://ps.uci.edu/~etolleru From jsseabold at gmail.com Fri May 22 18:05:16 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 22 May 2009 18:05:16 -0400 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> Message-ID: On Fri, May 22, 2009 at 5:57 PM, Erik Tollerud wrote: > So who has the power to update these docs, anyway? It doesn't seem > that complicated to make the necessary clarifications... > Anyone can. You just need to register a name for the docs wiki and ping the ML to request editing rights. See here: My current understanding is that once changes are made in the documentation editor, then they are applied to the SVN source from time to time. Is this accurate? Skipper From pav at iki.fi Fri May 22 18:10:26 2009 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 22 May 2009 22:10:26 +0000 (UTC) Subject: [SciPy-user] scipy.interpolate spline class names References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> Message-ID: Fri, 22 May 2009 18:05:16 -0400, Skipper Seabold wrote: [clip] > My current understanding is that once changes are made in the > documentation editor, then they are applied to the SVN source from time > to time. Is this accurate? Yes, they are applied manually, from time to time. Possibly not very frequently, but at least before releases. If you want to have some change applied ASAP, you can send mail to the scipy-dev and ask someone (= probably me) to commit the docs. -- Pauli Virtanen From erik.tollerud at gmail.com Fri May 22 18:16:31 2009 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 22 May 2009 15:16:31 -0700 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> Message-ID: What about changing the class names? Is this an unlikely proposition for backwards compatibility? I've been using some of these classes enough that I will probably attempt to add some functionality and post a patch on scipy-dev some time in the next few weeks, and if so I will probably include updated docs, if so. On Fri, May 22, 2009 at 3:10 PM, Pauli Virtanen wrote: > Fri, 22 May 2009 18:05:16 -0400, Skipper Seabold wrote: > [clip] >> My current understanding is that once changes are made in the >> documentation editor, then they are applied to the SVN source from time >> to time. Is this accurate? > > Yes, they are applied manually, from time to time. Possibly not very > frequently, but at least before releases. 
> > If you want to have some change applied ASAP, you can send mail to the > > scipy-dev and ask someone (= probably me) to commit the docs. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Erik Tollerud Graduate Student Center For Cosmology Department of Physics and Astronomy 2142 Frederick Reines Hall University of California, Irvine Office Phone: (949)824-2587 Cell: (651)307-9409 etolleru at uci.edu http://ps.uci.edu/~etolleru From erik.tollerud at gmail.com Fri May 22 18:38:30 2009 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 22 May 2009 15:38:30 -0700 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> Message-ID: These classes are indeed rather poorly documented, but once you get into them, they work very well. Also, be aware that the three *UnivariateSpline classes differ only in how they generate the knots: *UnivariateSpline: determines the number of knots by adding more knots until the smoothing condition (sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <= s) is satisfied - s is specified in the constructor or the set_smoothing_factor method. *LSQUnivariateSpline: the knots are specified in a sequence provided to the constructor (t) *InterpolatedUnivariateSpline: the spline is forced to pass through all the points (equivalent to s=0) But they are all evaluated by being called, as has already been explained. On Fri, May 22, 2009 at 1:26 PM, David Warde-Farley wrote: > On 22-May-09, at 3:57 PM, Robert Kern wrote: > >> On Fri, May 22, 2009 at 14:57, David Warde-Farley >> wrote: >>> I must be crazy, but how does one actually USE UnivariateSpline, etc. >>> to do interpolation? How do I evaluate the spline at other data after >>> it's fit? >>> >>> There seems to be no "evaluate" method or equivalent to splev. >> >> def __call__(self, x, nu=None): >> """ Evaluate spline (or its nu-th derivative) at positions x. >> Note: x can be unordered but the evaluation is more efficient >> if x is (partially) ordered. > > I somehow completely missed this. I guess I was skipping over the > __init__ method because I already understood it. :S > > Thanks Robert. > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Erik Tollerud Graduate Student Center For Cosmology Department of Physics and Astronomy 2142 Frederick Reines Hall University of California, Irvine Office Phone: (949)824-2587 Cell: (651)307-9409 etolleru at uci.edu http://ps.uci.edu/~etolleru From josef.pktd at gmail.com Fri May 22 19:22:14 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 May 2009 19:22:14 -0400 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> Message-ID: <1cd32cbb0905221622u6c5112b3v7b9ac72079119b0b@mail.gmail.com> On Fri, May 22, 2009 at 6:16 PM, Erik Tollerud wrote: > What about changing the class names? Is this an unlikely proposition > for backwards compatibility? > > I've been using some of these classes enough that I will probably > attempt to add some functionality and post a patch on scipy-dev some > time in the next few weeks, and if so I will probably include updated > docs, if so. 
> > On Fri, May 22, 2009 at 3:10 PM, Pauli Virtanen wrote: >> Fri, 22 May 2009 18:05:16 -0400, Skipper Seabold wrote: >> [clip] >>> My current understanding is that once changes are made in the >>> documentation editor, then they are applied to the SVN source from time >>> to time. Is this accurate? >> >> Yes, they are applied manually, from time to time. Possibly not very >> frequently, but at least before releases. >> >> If you want to have some change applied ASAP, you can send mail to the >> scipy-dev and ask someone (= probably me) to commit the docs. >> There is also still the open question how we get the information of the docstrings in class.__init__ into the sphinx docs. Josef From stefan at sun.ac.za Fri May 22 19:47:51 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 23 May 2009 01:47:51 +0200 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> Message-ID: <9457e7c80905221647m51fa734ge125307b4c005ecb@mail.gmail.com> 2009/5/23 Erik Tollerud : > These classes are indeed rather poorly documented, but once you get > into them, they work very well. It would be great if you guys could improve the documentation as you figure out how to use these functions. Even if you only add an example or two, that would be useful. The docs are editable in a wiki-like fashion on http://docs.scipy.org Thanks! St?fan From pav at iki.fi Fri May 22 20:24:58 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 23 May 2009 00:24:58 +0000 (UTC) Subject: [SciPy-user] scipy.interpolate spline class names References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> <1cd32cbb0905221622u6c5112b3v7b9ac72079119b0b@mail.gmail.com> Message-ID: Fri, 22 May 2009 19:22:14 -0400, josef.pktd wrote: [clip] > There is also still the open question how we get the information of the > docstrings in class.__init__ into the sphinx docs. The Numpy docstring standard dictated that the __init__ method should be documented in the main class docstring. I don't personally like this very much. Maybe we need to revise this? Anyway, the Sphinx dev version contains an improved version of autosummary that has features that could be used to address this. *** So I'd suggest currently just making a separate hand-written page for the interpolation class docs, making appropriate use of the autoclass:: and automethod:: directives. The main documentation page interpolate.rst could then contain the corresponding autosummary directives without the :toctree: argument. -- Pauli Virtanen From josef.pktd at gmail.com Fri May 22 21:03:42 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 May 2009 21:03:42 -0400 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> <1cd32cbb0905221622u6c5112b3v7b9ac72079119b0b@mail.gmail.com> Message-ID: <1cd32cbb0905221803q2903a8avb9ba35c8c5952505@mail.gmail.com> On Fri, May 22, 2009 at 8:24 PM, Pauli Virtanen wrote: > Fri, 22 May 2009 19:22:14 -0400, josef.pktd wrote: > [clip] >> There is also still the open question how we get the information of the >> docstrings in class.__init__ into the sphinx docs. > > The Numpy docstring standard dictated that the __init__ method should be > documented in the main class docstring. > > I don't personally like this very much. Maybe we need to revise this? 
> > Anyway, the Sphinx dev version contains an improved version of > autosummary that has features that could be used to address this. > > ? *** > > So I'd suggest currently just making a separate hand-written page for the > interpolation class docs, making appropriate use of the autoclass:: and > automethod:: directives. > > The main documentation page interpolate.rst could then contain the > corresponding autosummary directives without the :toctree: argument. > > -- > Pauli Virtanen Pauli, do you have an example how to do this? when I tried autoclass and automethod in the doc editor then it didn't produce the intended results. For example for the KroghInterpolator: Given last years discussion a lot of information was put into the __init__ What I would find very helpful would be if the link for KroghInterpolator in http://docs.scipy.org/scipy/docs/scipy-docs/interpolate.rst/ leads to the full autodocs of the class, with all or selected automethods. I would prefer one page per class for the class based modules such as the interpolator classes. I find the docs very well structured and accessible for functions but in many cases it doesn't provide a good structure for classes. If you have an example for how this can be done, then I could fix parts of the docs. The numpy docstring doesn't really define the structure for the sphinx documentation, or does it. I still appreciate the htmhelp files for windows a lot. It's very useful to have instantaneous search and access to the docs. That's why it's bugging me when the information is not in the docs, even though it can be accessed with >>> help(classname). But currently the docs don't produce the same result. Josef From ferrell at diablotech.com Fri May 22 23:59:48 2009 From: ferrell at diablotech.com (Robert Ferrell) Date: Fri, 22 May 2009 21:59:48 -0600 Subject: [SciPy-user] TimeSeries concatenate In-Reply-To: References: <68846329-16FB-457F-822C-5E49ECAED26A@diablotech.com> Message-ID: <499023C1-B84F-49B9-881E-32BB20E17FAD@diablotech.com> Thanks for the gentle reminder about help(). Sometimes I remember - hopefully more often now. That does indeed have exactly the information I was looking for. -robert On May 22, 2009, at 2:16 PM, Matt Knox wrote: >> Question: What happens to duplicate dates? It seems that the data in >> the first series is used. Is that the rule? > > One thing I would recommend (which is not obvious to new python > users many > times) is to check the function doc strings using the built in > "help" function > (see below). So to answer your question, yes that is the rule IF the > `remove_duplicates` parameter is set to "True" (which is the default). > > - Matt > >>>> import scikits.timeseries as ts >>>> help(ts.concatenate) > Help on function concatenate in module scikits.timeseries.tseries: > > concatenate(series, axis=0, remove_duplicates=True, > fill_missing=False) > Joins series together. > > The series are joined in chronological order. > Duplicated dates are handled with the `remove_duplicates` > parameter. > If `remove_duplicate` is False, duplicated dates are saved. > Otherwise, only the first occurence of the date is conserved. > > > Parameters > ---------- > series : {sequence} > Sequence of time series to join > axis : {0, None, int}, optional > Axis along which to join > remove_duplicates : {False, True}, optional > Whether to remove duplicated dates. > fill_missing : {False, True}, optional > Whether to fill the missing dates with missing values. 
> > Examples > -------- >>>> a = time_series([1,2,3], start_date=now('D')) >>>> b = time_series([10,20,30], start_date=now('D')+1) >>>> c = concatenate((a,b)) >>>> c._series > masked_array(data = [ 1 2 3 30], > mask = False, > fill_value=999999) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Sat May 23 00:28:46 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 23 May 2009 00:28:46 -0400 Subject: [SciPy-user] improving docs for classes was: Re: scipy.interpolate spline class names Message-ID: <1cd32cbb0905222128y598e6e08q4fdc8d5956407a91@mail.gmail.com> I'm changing the thread title. On Fri, May 22, 2009 at 9:03 PM, wrote: > On Fri, May 22, 2009 at 8:24 PM, Pauli Virtanen wrote: >> Fri, 22 May 2009 19:22:14 -0400, josef.pktd wrote: >> [clip] >>> There is also still the open question how we get the information of the >>> docstrings in class.__init__ into the sphinx docs. >> >> The Numpy docstring standard dictated that the __init__ method should be >> documented in the main class docstring. >> >> I don't personally like this very much. Maybe we need to revise this? >> >> Anyway, the Sphinx dev version contains an improved version of >> autosummary that has features that could be used to address this. >> >> ? *** >> >> So I'd suggest currently just making a separate hand-written page for the >> interpolation class docs, making appropriate use of the autoclass:: and >> automethod:: directives. >> >> The main documentation page interpolate.rst could then contain the >> corresponding autosummary directives without the :toctree: argument. >> >> -- >> Pauli Virtanen > > Pauli, > do you have an example how to do this? when I tried autoclass and > automethod in the doc editor then it didn't produce the intended > results. > > For example for the KroghInterpolator: > Given last years discussion a lot of information was put into the __init__ > > What I would find very helpful would be if the link for > KroghInterpolator in > http://docs.scipy.org/scipy/docs/scipy-docs/interpolate.rst/ ?leads to > the full autodocs of the class, with all or selected automethods. I > would ?prefer one page per class for the class based modules such as > the interpolator classes. > > I find the docs very well structured and accessible for functions but > in many cases it doesn't provide a good structure for classes. > > If you have an example for how this can be done, then I could fix > parts of the docs. The numpy docstring doesn't really define the > structure for the sphinx documentation, or does it. > > I still appreciate the htmhelp files for windows a lot. It's very > useful to have instantaneous search and access to the docs. That's why > it's bugging me when the information is not in the docs, even though > it can be accessed with >>> help(classname). But currently the docs > don't produce the same result. > some more on the current status of the documentation of classes: The __call__ method also seems to be a second class citizen, not only in interpolate. I assume it is not automatically included in the autodoc. I just checked scipy.stats.rv_continuous and stats.kde.gaussian_kde. rv_continuous mentions the __call__ method in the class docstring, but there is no further reference, stats.kde.gaussian_kde is silent about it's __call__ methods Also the class docstrings don't link to or list the methods. 
For example, rv_continuous lists all the important methods but there are no links. In addition to the classes in scipy, I checked numpy.DataSource and numpy.random.RandomState, and the help page for them is pretty uninformative; I don't know and cannot link to any of their methods. Compare this, for example, with help(numpy.DataSource). I hope that by the end of this year's GSoC, we will be adding many stats.model classes. So a good documentation pattern for classes would be very helpful for us. And if we allow for different array subclasses in the __init__ method, then we need the facility to document them. A similar example is scipy.signal.lti which allows 3 different constructors, see the doc page at http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lti.html#scipy.signal.lti I don't know whether this just needs some additional rst files, changes in the automatic document creation or in the docstring standard. Of course there is the way of writing full rst docs as in http://docs.scipy.org/numpy/source/numpy/doc/source/reference/arrays.ndarray.rst, but that looks like a lot of additional work, which might not happen very soon. So I think that for small to medium sized classes we should find a way to create automatic class documentation including the special methods __init__ and __call__ and maybe others, if these methods have a special meaning. __init__ is useful if we want to describe the constructor in more detail instead of copying all information into the class docs or to separate rst files. I'm still stuck getting members or methods added to an autoclass directive: http://docs.scipy.org/scipy/docs/scipy-docs/interpolate_UnivariateSpline.rst/ Josef From dwf at cs.toronto.edu Sat May 23 03:35:46 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 23 May 2009 03:35:46 -0400 Subject: [SciPy-user] using UnivariateSpline In-Reply-To: <9457e7c80905221647m51fa734ge125307b4c005ecb@mail.gmail.com> References: <804EA61B-A5B1-4048-B034-651A5BE2A46E@cs.toronto.edu> <3d375d730905221257u1cf7ac50p80cfd5b6040ab92d@mail.gmail.com> <9457e7c80905221647m51fa734ge125307b4c005ecb@mail.gmail.com> Message-ID: <4E497528-B09E-4CB3-902B-617337475023@cs.toronto.edu> On 22-May-09, at 7:47 PM, Stéfan van der Walt wrote: > It would be great if you guys could improve the documentation as you > figure out how to use these functions. Even if you only add an > example or two, that would be useful. Was planning on doing just that. :) David From stefan at sun.ac.za Sat May 23 05:43:33 2009 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sat, 23 May 2009 11:43:33 +0200 Subject: [SciPy-user] scipy.interpolate spline class names In-Reply-To: References: <1cd32cbb0905201938n6e91cc93s5429c613a8221296@mail.gmail.com> <1cd32cbb0905221622u6c5112b3v7b9ac72079119b0b@mail.gmail.com> Message-ID: <9457e7c80905230243x607f0a6an36fa23c2aa08ac16@mail.gmail.com> 2009/5/23 Pauli Virtanen : > The Numpy docstring standard dictated that the __init__ method should be > documented in the main class docstring. > > I don't personally like this very much. Maybe we need to revise this? The rationale behind this was that you never call __init__ explicitly, but always construct an instance using MyClass(parameters). On the other hand, both "IPython" and "help" now show docstrings for both the class and __init__ (I can't recall whether this was always the case), so it probably won't matter much if we move it over. 
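(A tiny illustration of the point above, with an invented class: plain help() already picks up both docstrings, so moving the parameter description between them would not hide it.)

class Demo(object):
    """Class docstring, written to the numpy standard."""
    def __init__(self, x):
        """Constructor docstring with the parameter details."""
        self.x = x

help(Demo)  # pydoc shows the class docstring and lists __init__
            # together with its own docstring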
Regards Stéfan From aisaac at american.edu Sat May 23 17:02:53 2009 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 23 May 2009 17:02:53 -0400 Subject: [SciPy-user] assign to diagonal values? In-Reply-To: References: <47A128A5.7010406@sci.utah.edu> Message-ID: <4A18647D.50207@american.edu> On 1/31/2008 1:37 AM Anne Archibald apparently wrote: > m[range(n),range(n)]=new_diagonal Will that work with range objects (in Python 3)? (Of course, arange could be used.) Just curious, Alan Isaac From robert.kern at gmail.com Sat May 23 17:05:01 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 23 May 2009 16:05:01 -0500 Subject: [SciPy-user] assign to diagonal values? In-Reply-To: <4A18647D.50207@american.edu> References: <47A128A5.7010406@sci.utah.edu> <4A18647D.50207@american.edu> Message-ID: <3d375d730905231405t474f1e63n67d68d55f317e20a@mail.gmail.com> On Sat, May 23, 2009 at 16:02, Alan G Isaac wrote: > On 1/31/2008 1:37 AM Anne Archibald apparently wrote: >> m[range(n),range(n)]=new_diagonal > > Will that work with range objects (in Python 3)? No. The automatic conversion to arrays does not consume iterators (nor will it when we port to Python 3). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Sat May 23 17:26:57 2009 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 23 May 2009 17:26:57 -0400 Subject: [SciPy-user] assign to diagonal values? In-Reply-To: <3d375d730905231405t474f1e63n67d68d55f317e20a@mail.gmail.com> References: <47A128A5.7010406@sci.utah.edu> <4A18647D.50207@american.edu> <3d375d730905231405t474f1e63n67d68d55f317e20a@mail.gmail.com> Message-ID: <4A186A21.5040102@american.edu> > On Sat, May 23, 2009 at 16:02, Alan G Isaac wrote: >> On 1/31/2008 1:37 AM Anne Archibald apparently wrote: >>> m[range(n),range(n)]=new_diagonal >> Will that work with range objects (in Python 3)? On 5/23/2009 5:05 PM Robert Kern apparently wrote: > No. The automatic conversion to arrays does not consume iterators (nor > will it when we port to Python 3). Sure, but range objects are not iterators. They are "almost" sequences. 
> > Sure, but range objects are not iterators. > They are "almost" sequences. The answer is still no. Perhaps someone will write special support for that type when we do the Python 3 port, but there's nothing in numpy that would make it work automatically. For example, xrange() does not work as an index with the current numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dmitrey15 at ukr.net Sun May 24 04:31:12 2009 From: dmitrey15 at ukr.net (Dmitrey Kroshko) Date: Sun, 24 May 2009 11:31:12 +0300 Subject: [SciPy-user] BOBYQA for scipy.optimize - has anyone willing to go for it? Message-ID: <4A1905D0.3030006@ukr.net> hi all, BOBYQA is state-of-the-art solver for NLP/NSP problems with box bounds lb <= x <= ub, when no user-supplied gradient/subgradient is available. License is BSD-like, language: fortran 77, author: Michael J.D. Powell. I have filed a ticket for scipy.optimize http://projects.scipy.org/scipy/ticket/950 Has anyone willing to go for it? Regards, D. From carlos.grohmann at gmail.com Sun May 24 22:07:55 2009 From: carlos.grohmann at gmail.com (=?ISO-8859-1?Q?Carlos_=22Gu=E2no=22_Grohmann?=) Date: Sun, 24 May 2009 23:07:55 -0300 Subject: [SciPy-user] eigenvector values (negative where it should be positive) Message-ID: Hello all I'm working on some structural geology data, using numpy (I'm following some class notes, so I can check my results). I have a set of directional data (azimuth/dip): 12 42 18 40 22 48 15 30 10 42 20 30 First I read the data and create a matrix with the direction cosines like this: #direction cosines relative to axis oriented north, east and down # phi = longitude = azimuth (dip direction) # theta = latitude = dip # xi = cos(theta[i])*cos(phi[i]) # yi = cos(theta[i])*sin(phi[i]) # zi = sin(theta[i]) # Tmat = orientation matrix T # Tmat = sum(xi2) sum(xi.yi) sum(xi.zi) # sum(yi.xi) sum(yi2) sum(yi.zi) # sum(zi.xi) sum(zi.yi) sum(zi2) This is the matrix: [[ 3.34172131 0.96327612 2.73061427] [ 0.96327612 0.29736701 0.78834422] [ 2.73061427 0.78834422 2.36091168]] So far so good, but according to my example, the eigenvectors should look like: Vector 1 Vector 2 Vector 3 X 0.749 -0.590 -0.300 Y 0.217 -0.210 0.953 Z 0.626 0.779 0.029 and I have this: [[-0.74913585 -0.59037777 0.30041565] [-0.21679731 -0.21002264 -0.95335692] [-0.62593482 0.77932315 -0.02934318]] So, the values are OK, but the negative signs I don't understand. any ideas are welcome TIA Carlos -- Carlos Henrique Grohmann - Geologist D.Sc. a.k.a. Guano - Linux User #89721 ResearcherID: A-9030-2008 carlos dot grohmann at gmail dot com http://www.igc.usp.br/pessoais/guano/ _________________ "Good morning, doctors. I have taken the liberty of removing Windows 95 from my hard drive." --The winning entry in a "What were HAL's first words" contest judged by 2001: A SPACE ODYSSEY creator Arthur C. Clarke Can?t stop the signal. From robert.kern at gmail.com Sun May 24 22:40:48 2009 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 May 2009 21:40:48 -0500 Subject: [SciPy-user] eigenvector values (negative where it should be positive) In-Reply-To: References: Message-ID: <3d375d730905241940o6c1a311blad232886ea372799@mail.gmail.com> 2009/5/24 Carlos "Gu?no" Grohmann : > Hello all > > I'm working on some structural geology data, using numpy (I'm > following some class notes, so I can check my results). 
I have a set > of directional data (azimuth/dip): > > 12 42 > 18 40 > 22 48 > 15 30 > 10 42 > 20 30 > > First I read the data and create a matrix with the direction cosines like this: > > #direction cosines relative to axis oriented north, east and down > # phi = longitude = azimuth (dip direction) > # theta = latitude = dip > # xi = cos(theta[i])*cos(phi[i]) > # yi = cos(theta[i])*sin(phi[i]) > # zi = sin(theta[i]) > # Tmat = orientation matrix T > # Tmat = sum(xi2) ? ?sum(xi.yi) ? ?sum(xi.zi) > # ? ? ? ?sum(yi.xi) ?sum(yi2) ? ? ?sum(yi.zi) > # ? ? ? ?sum(zi.xi) ?sum(zi.yi) ? ?sum(zi2) > > This is the matrix: > > [[ 3.34172131 ?0.96327612 ?2.73061427] > ?[ 0.96327612 ?0.29736701 ?0.78834422] > ?[ 2.73061427 ?0.78834422 ?2.36091168]] > > > So far so good, but according to my example, the eigenvectors should look like: > > ?Vector 1 Vector 2 Vector 3 > X ?0.749 ? ?-0.590 ? -0.300 > Y ?0.217 ? ?-0.210 ? ?0.953 > Z ?0.626 ? ?0.779 ? ? 0.029 > > and I have this: > > [[-0.74913585 -0.59037777 ?0.30041565] > ?[-0.21679731 -0.21002264 -0.95335692] > ?[-0.62593482 ?0.77932315 -0.02934318]] > > > So, the values are OK, but the negative signs I don't understand. Eigenvectors are unique only up to a scale factor. They are typically reported as normalized to a magnitude of 1, but that still leaves it ambiguous. If v is an eigenvector, -v is also an eigenvector. Both norm(v) and norm(-v) == 1. Which one you get is dependent on the details of the implementation. Both are correct answers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Mon May 25 11:32:33 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 May 2009 11:32:33 -0400 Subject: [SciPy-user] mathworks fileexchange with BSD license Message-ID: <1cd32cbb0905250832nd45503di2cc568189f94912c@mail.gmail.com> I just saw that mathworks fileexchange includes now a license statement and most of the recent code, I looked at, has a BSD license attached. This makes adapting code for the use with scipy much easier. Last time when I asked an author of a script about the license, I only got a "free for non-commercial use", now it is under BSD. Josef From loris.bennett at fu-berlin.de Tue May 26 02:24:06 2009 From: loris.bennett at fu-berlin.de (Loris Bennett) Date: Tue, 26 May 2009 08:24:06 +0200 Subject: [SciPy-user] Install failure on AIX 5.3 due to missing linker flag prefix for compiler Message-ID: <1243319046.4790.0.camel@localhost> Hi, I am trying to install SciPy 0.7.0 on AIX 5.3. 
I have managed to install NumPy 1.3.0 (although there was a minor problem there: http://bugs.python.org/issue941346 Now I am getting the following error when I try to install SciPy: g++ g++ -pthread -bI:/opt/sw/python/Python-2.6.2/lib/python2.6/config/python.exp build/temp.aix-5.3-2.6/scipy/interpolate/src/_interpolate.o -Lbuild/temp.aix-5.3-2.6 -o build/lib.aix-5.3-2.6/scipy/interpolate/_interpolate.so g++: '-b' must come at the start of the command line g++: '-b' must come at the start of the command line error: Command "g++ g++ -pthread -bI:/opt/sw/python/Python-2.6.2/lib/python2.6/config/python.exp build/temp.aix-5.3-2.6/scipy/interpolate/src/_interpolate.o -Lbuild/temp.aix-5.3-2.6 -o build/lib.aix-5.3-2.6/scipy/interpolate/_interpolate.so" failed with exit status 1 This is essentially the same problem, in that parameters which need to be passed to the linker need to be prefixed with "-Wl,", which does not happen. The problem has been reported here: http://www.mail-archive.com/numpy-discussion at scipy.org/msg02578.html but the solution there no longer applies to the current sources. Any help will be much appreciated. Loris -- Dr. Loris Bennett Computer Centre Freie Universit?t Berlin Berlin, Germany From jdgleeson at mac.com Tue May 26 12:05:22 2009 From: jdgleeson at mac.com (John Gleeson) Date: Tue, 26 May 2009 10:05:22 -0600 Subject: [SciPy-user] BOBYQA for scipy.optimize - has anyone willing to go for it? In-Reply-To: <4A1905D0.3030006@ukr.net> References: <4A1905D0.3030006@ukr.net> Message-ID: <19FC03D5-A4EF-435C-A218-088BF2217206@mac.com> On 2009-05-24, at 2:31 AM, Dmitrey Kroshko wrote: > hi all, > > BOBYQA is state-of-the-art solver for NLP/NSP problems with box bounds > lb <= x <= ub, when no user-supplied gradient/subgradient is > available. License is BSD-like, language: fortran 77, author: Michael > J.D. Powell. > > I have filed a ticket for scipy.optimize > http://projects.scipy.org/scipy/ticket/950 > Has anyone willing to go for it? > > Regards, D. > I would like to do this. I'll have some time in a couple days. John From arserlom at gmail.com Tue May 26 12:35:58 2009 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Tue, 26 May 2009 18:35:58 +0200 Subject: [SciPy-user] Import problem when using py2exe and scipy. Message-ID: Hello list. I've run into a problem when packaging a program that uses scipy with py2exe. When the program tries to "from scipy import interpolate" I get an "ImportError: cannot import name factorial" coming from line 2 in scipy\interpolate\polyint.py. This problem doesn't pop when I run the program normally (that is, before packing it with py2exe). As a workaround, I can either go back to using scipy 0.6 or comment out line 2 in scipy\interpolate\polyint.py. I'm not the only one with this problem, as you can see here: http://article.gmane.org/gmane.comp.python.py2exe/3324 I'm using python 2.5.4 on Windows XP. Armando. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.whitcomb at nrlmry.navy.mil Tue May 26 12:51:10 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. 
Tim) Date: Tue, 26 May 2009 09:51:10 -0700 Subject: [SciPy-user] Install failure on AIX 5.3 due to missing linker flagprefix for compiler In-Reply-To: <1243319046.4790.0.camel@localhost> References: <1243319046.4790.0.camel@localhost> Message-ID: > Now I am getting the following error when I try to install SciPy: > > g++ g++ -pthread > > -bI:/opt/sw/python/Python-2.6.2/lib/python2.6/config/python.exp > build/temp.aix-5.3-2.6/scipy/interpolate/src/_interpolate.o > -Lbuild/temp.aix-5.3-2.6 -o > build/lib.aix-5.3-2.6/scipy/interpolate/_interpolate.so > g++: '-b' must come at the start of the command line > g++: '-b' must come at the start of the command line > error: Command "g++ g++ -pthread I ran into this issue with Scipy as well - the command *should* look something like /path/to/ld_so_aix [c++ compiler].... but gets changed to [c++ compiler] [c++ compiler] which I believe is an error. The fix that I used was to edit unixccompiler.py in the distutils package, and move the linker[i] = self.compiler_cxx[i] statement under the if os.path.basename(linker[0]) == "env" statement - this got rid of that issue. It also looks like it's including files using -bI:, which is more XL C++-ish than g++. I am very new to working on AIX machines, so I can't say if this is an error as well. Hopefully someone with more AIX experience than me can comment on these issues. On a side note, does numpy.test() crash with a MemoryError on your installation? Tim From mmanns at gmx.net Tue May 26 18:33:41 2009 From: mmanns at gmx.net (mmanns at gmx.net) Date: Wed, 27 May 2009 00:33:41 +0200 Subject: [SciPy-user] Print unicode objects in object arrays Message-ID: <20090527003341.06cc8a95@gmx.net> Hi Is there a way of printing unicode objects that are inside an object array? $ python Python 2.5.4 (r254:67916, Feb 17 2009, 20:16:45) [GCC 4.3.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> u = [u'\u201e'] >>> u [u'\u201e'] >>> import numpy >>> a = numpy.array(u, dtype="O") >>> a Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.5/site-packages/numpy/core/numeric.py", line 1088, in array_repr ', ', "array(") File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", line 287, in array2string separator, prefix) File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", line 216, in _array2string _summaryEdgeItems, summary_insert)[:-1] File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", line 333, in _formatArray word = format_function(a[-1]) UnicodeEncodeError: 'ascii' codec can't encode character u'\u201e' in position 0: ordinal not in range(128) >>> numpy.__version__ '1.2.1' Thanks in advance Martin From robert.kern at gmail.com Tue May 26 18:43:22 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 May 2009 17:43:22 -0500 Subject: [SciPy-user] Print unicode objects in object arrays In-Reply-To: <20090527003341.06cc8a95@gmx.net> References: <20090527003341.06cc8a95@gmx.net> Message-ID: <3d375d730905261543s4836f030mc95547d87daf2783@mail.gmail.com> On Tue, May 26, 2009 at 17:33, wrote: > Hi > > Is there a way of printing unicode objects that are inside an object > array? > > $ python > Python 2.5.4 (r254:67916, Feb 17 2009, 20:16:45) > [GCC 4.3.3] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
>>>> u = [u'\u201e'] >>>> u > [u'\u201e'] >>>> import numpy >>>> a = numpy.array(u, dtype="O") >>>> a > Traceback (most recent call last): > ?File "", line 1, in > ?File "/usr/lib/python2.5/site-packages/numpy/core/numeric.py", line > 1088, in array_repr ', ', "array(") > ?File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", > line 287, in array2string separator, prefix) > ?File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", > line 216, in _array2string _summaryEdgeItems, summary_insert)[:-1] > ?File "/usr/lib/python2.5/site-packages/numpy/core/arrayprint.py", > line 333, in _formatArray word = format_function(a[-1]) > UnicodeEncodeError: 'ascii' codec can't encode character u'\u201e' in > position 0: ordinal not in range(128) >>>> numpy.__version__ > '1.2.1' Hmm, looks like we use str() for object arrays instead of repr(). That is unfortunate. You can work around this with a hack: In [18]: class array(np.ndarray): ....: _format = repr ....: ....: In [20]: a = empty(1, object) In [21]: a[0] = u'\u201e' In [22]: a.view(array) Out[22]: array([u'\u201e'], dtype=object) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jason-sage at creativetrax.com Wed May 27 04:27:32 2009 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Wed, 27 May 2009 03:27:32 -0500 Subject: [SciPy-user] fminbound now passes arrays, but used to pass numbers Message-ID: <4A1CF974.1060901@creativetrax.com> In changeset 5205 (29 Nov 2008), to resolve #544, someone added the following code to the fminbound function in optimize/optimize.py: x1 = atleast_1d(x1) x2 = atleast_1d(x2) if len(x1) != 1 or len(x2) != 1: raise ValueError, "Optimisation bounds must be scalars" \ " or length 1 arrays" An effect of the first two lines is that the x value passed to the function a few lines later is no longer a single number, but an ndarray. This messes things up for us in the Sage project, where the calculations in the function may or may not know how to deal with an ndarray. Can we make x1 and x2 numbers if they were originally numbers? Otherwise, we have to wrap all of our functions in a (slow) python call lambda x: f(x[0]). Thanks, Jason -- Jason Grout From stefan at sun.ac.za Wed May 27 05:29:47 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 27 May 2009 11:29:47 +0200 Subject: [SciPy-user] fminbound now passes arrays, but used to pass numbers In-Reply-To: <4A1CF974.1060901@creativetrax.com> References: <4A1CF974.1060901@creativetrax.com> Message-ID: <9457e7c80905270229o54cbfccexbf3d27eb9757b808@mail.gmail.com> Hi Jason 2009/5/27 : > In changeset 5205 (29 Nov 2008), to resolve #544, someone added the > following code to the fminbound function in optimize/optimize.py: > > x1 = atleast_1d(x1) > x2 = atleast_1d(x2) > if len(x1) != 1 or len(x2) != 1: > ? ? raise ValueError, "Optimisation bounds must be scalars" \ > ? ? ? ? ? " or length 1 arrays" > > An effect of the first two lines is that the x value passed to the > function a few lines later is no longer a single number, but an > ndarray. ?This messes things up for us in the Sage project, where the > calculations in the function may or may not know how to deal with an > ndarray. ?Can we make x1 and x2 numbers if they were originally > numbers? ?Otherwise, we have to wrap all of our functions in a (slow) > python call lambda x: f(x[0]). 
This should be fixed in http://projects.scipy.org/scipy/changeset/5790 Thanks, St?fan From devicerandom at gmail.com Wed May 27 08:24:49 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 13:24:49 +0100 Subject: [SciPy-user] integrating a system of differential equations Message-ID: <4A1D3111.6060000@gmail.com> Hello, I have to integrate a huge system of differential equations. The system is such that for each time step, the solution of equation j-1 is a parameter for equation j, so they have to be all integrated together at the same time. I tried to do it myself but it seems whatever I do is by no means stable. Can one do that using odeint? I tried to look for odeint documentation but it's not very clear to me if it is possible (especially for someone like me not being exactly accustomed to numerical resolution of ODEs). Thanks! Massimo From jason-sage at creativetrax.com Wed May 27 09:33:42 2009 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Wed, 27 May 2009 08:33:42 -0500 Subject: [SciPy-user] fminbound now passes arrays, but used to pass numbers In-Reply-To: <9457e7c80905270229o54cbfccexbf3d27eb9757b808@mail.gmail.com> References: <4A1CF974.1060901@creativetrax.com> <9457e7c80905270229o54cbfccexbf3d27eb9757b808@mail.gmail.com> Message-ID: <4A1D4136.5070605@creativetrax.com> St?fan van der Walt wrote: > Hi Jason > > 2009/5/27 : > >> In changeset 5205 (29 Nov 2008), to resolve #544, someone added the >> following code to the fminbound function in optimize/optimize.py: >> >> x1 = atleast_1d(x1) >> x2 = atleast_1d(x2) >> if len(x1) != 1 or len(x2) != 1: >> raise ValueError, "Optimisation bounds must be scalars" \ >> " or length 1 arrays" >> >> An effect of the first two lines is that the x value passed to the >> function a few lines later is no longer a single number, but an >> ndarray. This messes things up for us in the Sage project, where the >> calculations in the function may or may not know how to deal with an >> ndarray. Can we make x1 and x2 numbers if they were originally >> numbers? Otherwise, we have to wrap all of our functions in a (slow) >> python call lambda x: f(x[0]). >> > > This should be fixed in > > http://projects.scipy.org/scipy/changeset/5790 > Thanks! I'll probably cherry-pick that patch for Sage for now, and look forward to upgrading to 0.7.1 soon when it comes out. Thanks, Jason From jason-sage at creativetrax.com Wed May 27 09:35:00 2009 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Wed, 27 May 2009 08:35:00 -0500 Subject: [SciPy-user] linear regression Message-ID: <4A1D4184.9020009@creativetrax.com> Is there a recommended way now of calculating the slope of a linear regression? Using the scipy.stats.linregress function gives a deprecation warning, apparently because that function uses the scipy.mean function: sage: import numpy sage: import scipy.stats sage: scipy.stats.linregress(numpy.asarray([4,3,2,1,2,3,4]), numpy.asarray([1,2,3,4,3,2,1])) /home/jason/download/sage-sage-4.0.alpha0.5/local/lib/python2.5/site-packages/scipy/stats/stats.py:420: DeprecationWarning: scipy.stats.mean is deprecated; please update your code to use numpy.mean. Please note that: - numpy.mean axis argument defaults to None, not 0 - numpy.mean has a ddof argument to replace bias in a more general manner. scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) (-1.0, 5.0, -1.0, 1.9206748078018268e-50, 0.0) This is scipy 0.7.0. 
Thanks, Jason -- Jason Grout From bsouthey at gmail.com Wed May 27 09:56:14 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 27 May 2009 08:56:14 -0500 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D4184.9020009@creativetrax.com> References: <4A1D4184.9020009@creativetrax.com> Message-ID: <4A1D467E.2090301@gmail.com> jason-sage at creativetrax.com wrote: > Is there a recommended way now of calculating the slope of a linear > regression? Using the scipy.stats.linregress function gives a > deprecation warning, apparently because that function uses the > scipy.mean function: > > sage: import numpy > sage: import scipy.stats > sage: scipy.stats.linregress(numpy.asarray([4,3,2,1,2,3,4]), > numpy.asarray([1,2,3,4,3,2,1])) > /home/jason/download/sage-sage-4.0.alpha0.5/local/lib/python2.5/site-packages/scipy/stats/stats.py:420: > DeprecationWarning: scipy.stats.mean is deprecated; please update your > code to use numpy.mean. > Please note that: > - numpy.mean axis argument defaults to None, not 0 > - numpy.mean has a ddof argument to replace bias in a more general > manner. > scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, > axis=0, ddof=1). > axis=0, ddof=1).""", DeprecationWarning) > (-1.0, 5.0, -1.0, 1.9206748078018268e-50, 0.0) > > > This is scipy 0.7.0. > > Thanks, > > Jason > > Hi, This should be addressed in the SVN version. Please note that you might see similar messages in other functions (var and samplevar) because any functions that are duplicated with numpy have been or should be depreciated in scipy. I think there are many people who would like this function to disappear because it is just simple linear regression (ie relationship between two variables - http://en.wikipedia.org/wiki/Simple_linear_regression). There are various options like optimize.leastsq and the OLS function at http://www.scipy.org/Cookbook/OLS. Hopefully Skipper's GSoC work using Jonathan Taylor's statistical models will provide a more general approach. Does Sage have any particular needs for regression? Bruce From devicerandom at gmail.com Wed May 27 10:05:03 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 15:05:03 +0100 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D4184.9020009@creativetrax.com> References: <4A1D4184.9020009@creativetrax.com> Message-ID: <4A1D488F.6070603@gmail.com> jason-sage at creativetrax.com ha scritto: > Is there a recommended way now of calculating the slope of a linear > regression? Using the scipy.stats.linregress function gives a > deprecation warning, apparently because that function uses the > scipy.mean function: I think you can use polyfit for doing linear regression, isn't it? m. From josef.pktd at gmail.com Wed May 27 10:01:03 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 10:01:03 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D488F.6070603@gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> Message-ID: <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> On Wed, May 27, 2009 at 10:05 AM, ms wrote: > jason-sage at creativetrax.com ha scritto: >> Is there a recommended way now of calculating the slope of a linear >> regression? ?Using the scipy.stats.linregress function gives a >> deprecation warning, apparently because that function uses the >> scipy.mean function: > > I think you can use polyfit for doing linear regression, isn't it? 
but you don't get the slope coefficient and the standard errors, if you want more than just prediction. Josef From matthieu.brucher at gmail.com Wed May 27 10:05:56 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 27 May 2009 16:05:56 +0200 Subject: [SciPy-user] Segmentation fault with 0.7 Message-ID: Hi, I've also tested scipy 0.7 with the MKL (no choice, I don't have atlas or refblas installed, and I found a way of using the latest by preloading libmkl_core.so), and I got a segmentation fault on a LAPACK function: test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyev Clapack empty, skip clapack test test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr Clapack empty, skip clapack test test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr_ranges Clapack empty, skip clapack test test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyev Clapack empty, skip clapack test test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr Clapack empty, skip clapack test test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr_ranges Clapack empty, skip clapack test test_dsyev (test_esv.TestEsv) ... ok test_dsyevr (test_esv.TestEsv) ... Segmentation fault Is it a new function or something like that? I don't remember encoutering this error in previous packages (although I didn't always launched the full tests). Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From josef.pktd at gmail.com Wed May 27 09:59:22 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 09:59:22 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D4184.9020009@creativetrax.com> References: <4A1D4184.9020009@creativetrax.com> Message-ID: <1cd32cbb0905270659t3efa31cbrf9f7fa3a2753eb07@mail.gmail.com> On Wed, May 27, 2009 at 9:35 AM, wrote: > Is there a recommended way now of calculating the slope of a linear > regression? ?Using the scipy.stats.linregress function gives a > deprecation warning, apparently because that function uses the > scipy.mean function: > > sage: import numpy > sage: import scipy.stats > sage: scipy.stats.linregress(numpy.asarray([4,3,2,1,2,3,4]), > numpy.asarray([1,2,3,4,3,2,1])) > /home/jason/download/sage-sage-4.0.alpha0.5/local/lib/python2.5/site-packages/scipy/stats/stats.py:420: > DeprecationWarning: scipy.stats.mean is deprecated; please update your > code to use numpy.mean. > Please note that: > ? - numpy.mean axis argument defaults to None, not 0 > ? - numpy.mean has a ddof argument to replace bias in a more general > manner. > ? ? scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, > axis=0, ddof=1). > ?axis=0, ddof=1).""", DeprecationWarning) > (-1.0, 5.0, -1.0, 1.9206748078018268e-50, 0.0) > > > This is scipy 0.7.0. > I backported a fix for incorrect slopes standard error (http://projects.scipy.org/scipy/ticket/874) together with the switch to using numpy versions of the depreciated stats function. However, not all usage of the depreciated functions has been backported to 0.7.1, but all are (supposed to be) fixed in the trunk for 0.8. 
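In the meantime, the slope, the intercept and the slope's standard error are easy to compute with plain numpy. A minimal sketch using the textbook simple-regression formulas and the data from the original report (the variable names are only illustrative):

import numpy as np

x = np.array([4., 3., 2., 1., 2., 3., 4.])
y = np.array([1., 2., 3., 4., 3., 2., 1.])
n = len(x)

# ordinary least-squares slope and intercept
sxx = ((x - x.mean())**2).sum()
sxy = ((x - x.mean()) * (y - y.mean())).sum()
slope = sxy / sxx
intercept = y.mean() - slope * x.mean()

# standard error of the slope from the residual variance
resid = y - (intercept + slope * x)
s2 = (resid**2).sum() / (n - 2)
stderr = np.sqrt(s2 / sxx)

print slope, intercept, stderr

This data lies exactly on the line y = 5 - x, so the sketch gives slope -1.0, intercept 5.0 and standard error 0.0, matching the linregress output shown above.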
So, this kind of deprecation warning in 0.7.0 and 0.7.1 is just the result of the unfinished conversion to the numpy stats functions. Josef From mudit_19a at yahoo.com Wed May 27 10:13:13 2009 From: mudit_19a at yahoo.com (mudit sharma) Date: Wed, 27 May 2009 19:43:13 +0530 (IST) Subject: [SciPy-user] concave and convex function In-Reply-To: <3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> <253017.92520.qm@web94903.mail.in2.yahoo.com> <3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com> Message-ID: <586133.68816.qm@web94909.mail.in2.yahoo.com> Thanks Robert. I appreciate your response. I finally found the solution: use a Savitzky-Golay filter for smoothing, since it preserves the shape, and then a peak and trough detection algorithm. Some useful links are here: http://terpconnect.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm. Unfortunately, these are all Matlab scripts, so I will have to write Python equivalents. Mudit ----- Original Message ---- From: Robert Kern To: SciPy Users List Sent: Monday, 18 May, 2009 23:20:24 Subject: Re: [SciPy-user] concave and convex function On Mon, May 18, 2009 at 02:57, Sebastian Walter wrote: > On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: >> >> Thanks for your response. >> >> By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. > Is that supposed to describe what is meant by M & W? Peak-trough-peak and trough-peak-trough patterns, respectively, like the shapes of the letters. > No offense, but > if you want help, you should > state your problem in a way that other ppl understand.... His actual question is reasonably well-worded (he wants to classify the signal into convex and concave portions), but you got distracted by the irrelevant portion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From joshua.stults at gmail.com Wed May 27 10:16:53 2009 From: joshua.stults at gmail.com (Joshua Stults) Date: Wed, 27 May 2009 10:16:53 -0400 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: <4A1D3111.6060000@gmail.com> References: <4A1D3111.6060000@gmail.com> Message-ID: Massimo, If you're having stability problems, usually going to an implicit integration scheme will help. Looks like the scipy.integrate.ode class lets you choose an integration scheme based on backward difference formulas, which should be unconditionally stable. You'll just need to supply a system function and a Jacobian function. Hope that helps. On Wed, May 27, 2009 at 8:24 AM, ms wrote: > Hello, > > I have to integrate a huge system of differential equations. The system > is such that for each time step, the solution of equation j-1 is a > parameter for equation j, so they have to be all integrated together at > the same time. > > I tried to do it myself but it seems whatever I do is by no means stable. > > Can one do that using odeint? I tried to look for odeint documentation > but it's not very clear to me if it is possible (especially for someone > like me not being exactly accustomed to numerical resolution of ODEs). > > Thanks!
> Massimo > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Joshua Stults Website: http://j-stults.blogspot.com From josef.pktd at gmail.com Wed May 27 10:27:25 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 10:27:25 -0400 Subject: [SciPy-user] concave and convex function In-Reply-To: <586133.68816.qm@web94909.mail.in2.yahoo.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> <253017.92520.qm@web94903.mail.in2.yahoo.com> <3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com> <586133.68816.qm@web94909.mail.in2.yahoo.com> Message-ID: <1cd32cbb0905270727q7294ffd0jd1ecf0858d94080e@mail.gmail.com> On Wed, May 27, 2009 at 10:13 AM, mudit sharma wrote: > > Thanks Robert. I appreciate your response. > > I found the solution finally, which is, using Savitzky Golay filter for smoothing as it preserves the shape. Then using peak and trough points detection algorithm. Some useful links here: > http://terpconnect.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm. Unfortunately, all these matlab scripts so will have to write python equivalent. > > Mudit > > > > ----- Original Message ---- > From: Robert Kern > To: SciPy Users List > Sent: Monday, 18 May, 2009 23:20:24 > Subject: Re: [SciPy-user] concave and convex function > > On Mon, May 18, 2009 at 02:57, Sebastian Walter > wrote: >> On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: >>> >>> Thanks for your response. >>> >>> By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. >> Is that supposed to describe what is meant by M & W? > > Peak-trough-peak and trough-peak-trough patterns, respectively, like > the shapes of the letters. > >> No offense, but >> if you want help, you should >> state your problem in a way that other ppl understand.... > > His actual question is reasonably well-worded (he wants to classify > the signal into convex and concave portions), but you got distracted > by the irrelevant portion. > > -- I still don't see identifying peaks and troughs anywhere in the initial question. Identifying peaks and troughs is a question for zeros in the first derivative; identifying convex and concave regions is a question for zeros in the second derivative. There is an entire "industry" trying to do this for the business cycle. Josef From mudit_19a at yahoo.com Wed May 27 11:11:32 2009 From: mudit_19a at yahoo.com (mudit sharma) Date: Wed, 27 May 2009 20:41:32 +0530 (IST) Subject: [SciPy-user] concave and convex function In-Reply-To: <1cd32cbb0905270727q7294ffd0jd1ecf0858d94080e@mail.gmail.com> References: <892795.73286.qm@web94915.mail.in2.yahoo.com> <1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com> <253017.92520.qm@web94903.mail.in2.yahoo.com> <3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com> <586133.68816.qm@web94909.mail.in2.yahoo.com> <1cd32cbb0905270727q7294ffd0jd1ecf0858d94080e@mail.gmail.com> Message-ID: <470075.29661.qm@web94910.mail.in2.yahoo.com> M and W curves represent peak-trough-peak patterns; I probably should have used the latter term. It's commonly used in finance to identify trend reversals. I could have used the first and second derivatives, but that doesn't give appropriate results when applied to noisy data. This can be avoided by identifying every peak and valley, then filtering out shallow peaks and valleys based on an arbitrary depth parameter.
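A rough numpy sketch of that kind of detector (the function name, the test signal and the delta threshold are all made up; delta plays the role of the arbitrary depth parameter, and the smoothing step, e.g. Savitzky-Golay, is assumed to have happened first):

import numpy as np

def peakdet(y, delta):
    # running-extremum scheme: record a peak once the series has dropped
    # by more than delta from the running maximum, and a valley once it
    # has risen by more than delta from the running minimum
    peaks, valleys = [], []
    mn, mx = np.inf, -np.inf
    mnpos = mxpos = 0
    look_for_max = True
    for i in range(len(y)):
        v = y[i]
        if v > mx:
            mx, mxpos = v, i
        if v < mn:
            mn, mnpos = v, i
        if look_for_max:
            if v < mx - delta:
                peaks.append(mxpos)
                mn, mnpos = v, i
                look_for_max = False
        else:
            if v > mn + delta:
                valleys.append(mnpos)
                mx, mxpos = v, i
                look_for_max = True
    return np.array(peaks), np.array(valleys)

t = np.linspace(0, 4 * np.pi, 400)
y = np.sin(t) + 0.05 * np.random.randn(400)   # two big peaks plus noise
print peakdet(y, 0.3)

Wiggles shallower than delta never trigger a sign change here, so only the deep peaks and valleys are reported.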
----- Original Message ---- From: "josef.pktd at gmail.com" To: SciPy Users List Sent: Wednesday, 27 May, 2009 15:27:25 Subject: Re: [SciPy-user] concave and convex function On Wed, May 27, 2009 at 10:13 AM, mudit sharma wrote: > > Thanks Robert.. I appreciate your response. > > I found the solution finally, which is, using Savitzky Golay filter for smoothing as it preserves the shape. Then using peak and trough points detection algorithm. Some useful links here: > http://terpconnect.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm.. Unfortunately, all these matlab scripts so will have to write python equivalent. > > Mudit > > > > ----- Original Message ---- > From: Robert Kern > To: SciPy Users List > Sent: Monday, 18 May, 2009 23:20:24 > Subject: Re: [SciPy-user] concave and convex function > > On Mon, May 18, 2009 at 02:57, Sebastian Walter > wrote: >> On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: >>> >>> Thanks for your response. >>> >>> By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle. >> Is that supposed to describe what is meant by M & W? > > Peak-trough-peak and trough-peak-trough patterns, respectively, like > the shapes of the letters. > >> No offense, but >> if you want help, you should >> state your problem in a way that other ppl understand.... > > His actual question is reasonably well-worded (he wants to classify > the signal into convex and concave portions), but you got distracted > by the irrelevant portion. > > -- I still don't see identifying peaks and troughs anywhere in the initial question. Identifying peaks and troughs is a question for zeros in the first derivative; identifying convex and concave regions is a question for zeros in the second derivative. There is an entire "industry" trying to do this for the business cycle. Josef _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From devicerandom at gmail.com Wed May 27 11:30:56 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 16:30:56 +0100 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: References: <4A1D3111.6060000@gmail.com> Message-ID: <4A1D5CB0.1010100@gmail.com> Hi Josh, Joshua Stults ha scritto: > Massimo, > > If you're having stability problems, usually going to an implicit > integration scheme will help. Looks like the scipy.integrate.ode > class lets you choose an integration scheme based on backward > difference formulas, which should be unconditionally stable. Oh, good. >You'll > just need to supply a system function and a Jacobian function. This is quite unclear to me. That is: - A single function should calculate the whole system? This is what is done of course, with each dy(j)/dt saved in a vector at index j for every j-th equation; but I am not sure it is doable in the way ode wants it -because I really don't understand how ode wants stuff. - As for the Jacobian, I'm lost. If it is the matrix described here: http://en.wikipedia.org/wiki/Jacobian_matrix I don't understand, looks redundant -it seems in my case it will be a vector of all my derivatives as a function of t (there are no other variables I'm integrating) -but isn't it the output of the above function? But it is all really new stuff for me, sorry. thanks, m. From davide.cittaro at ifom-ieo-campus.it Wed May 27 11:34:34 2009 From: davide.cittaro at ifom-ieo-campus.it (Davide Cittaro) Date: Wed, 27 May 2009 17:34:34 +0200 Subject: [SciPy-user] [half OT?] 
best way to store a spectrum Message-ID: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> Hi all, I have a bunch of spectra listed within a file (mass/intensity values). I'm planning to analyze them and match with theoretical spectra... In you opinion, which is the best way to store them for an efficient analysis? In the past this has been done with python arrays (i.e. a spectrum was an array of arrays, each peak was a mass/intensity array)... Thanks d /* Davide Cittaro Cogentech - Consortium for Genomic Technologies via adamello, 16 20139 Milano Italy tel.: +39(02)574303007 e-mail: davide.cittaro at ifom-ieo-campus.it */ -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.stults at gmail.com Wed May 27 11:46:05 2009 From: joshua.stults at gmail.com (Joshua Stults) Date: Wed, 27 May 2009 11:46:05 -0400 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: <4A1D5CB0.1010100@gmail.com> References: <4A1D3111.6060000@gmail.com> <4A1D5CB0.1010100@gmail.com> Message-ID: Look at the example in the documentation: help(scipy.integrate.ode) That should give you an idea of how to define your system's function, it returns a vector since you are trying to integrate a system of ODEs. The Jacobian is a matrix of partial derivatives of your system function, call it f, with respect to all of your variables: J = df_i/dx_j J = [ df_1/dx_1 df_1/dx_2 ... df_1/dx_n] [ df_2/dx_1 df_2/dx_2 ... df_2/dx_n] [ ... ... ] [ df_n/dx_1 df_n/dx_2 ... df_n/dx_n] I'm assuming you are integrating something like: dy/dt = f(x) On Wed, May 27, 2009 at 11:30 AM, ms wrote: > Hi Josh, > > Joshua Stults ha scritto: >> Massimo, >> >> If you're having stability problems, usually going to an implicit >> integration scheme will help. ?Looks like the scipy.integrate.ode >> class lets you choose an integration scheme based on backward >> difference formulas, which should be unconditionally stable. > > Oh, good. > >>You'll >> just need to supply a system function and a Jacobian function. > > This is quite unclear to me. That is: > - A single function should calculate the whole system? This is what is > done of course, with each dy(j)/dt saved in a vector at index j for > every j-th equation; but I am not sure it is doable in the way ode wants > it -because I really don't understand how ode wants stuff. > > - As for the Jacobian, I'm lost. If it is the matrix described here: > > http://en.wikipedia.org/wiki/Jacobian_matrix > > I don't understand, looks redundant -it seems in my case it will be a > vector of all my derivatives as a function of t (there are no other > variables I'm integrating) -but isn't it the output of the above > function? But it is all really new stuff for me, sorry. > > thanks, > m. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Joshua Stults Website: http://j-stults.blogspot.com From devicerandom at gmail.com Wed May 27 11:54:47 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 16:54:47 +0100 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> Message-ID: <4A1D6247.5090004@gmail.com> josef.pktd at gmail.com ha scritto: > On Wed, May 27, 2009 at 10:05 AM, ms wrote: >> jason-sage at creativetrax.com ha scritto: >>> Is there a recommended way now of calculating the slope of a linear >>> regression? Using the scipy.stats.linregress function gives a >>> deprecation warning, apparently because that function uses the >>> scipy.mean function: >> I think you can use polyfit for doing linear regression, isn't it? > > but you don't get the slope coefficient and the standard errors, if > you want more than just prediction. You mean the correlation coefficient? This is numpy.corrcoef() or something like that. But for the std errors, you are right -I get them from odr usually for non linear fits, but maybe it's overkill for linear fit. m. From silva at lma.cnrs-mrs.fr Wed May 27 11:50:36 2009 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 27 May 2009 17:50:36 +0200 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: <4A1D5CB0.1010100@gmail.com> References: <4A1D3111.6060000@gmail.com> <4A1D5CB0.1010100@gmail.com> Message-ID: <1243439437.14846.51.camel@localhost.localdomain> Le mercredi 27 mai 2009 ? 16:30 +0100, ms a ?crit : > This is quite unclear to me. That is: > - A single function should calculate the whole system? This is what is > done of course, with each dy(j)/dt saved in a vector at index j for > every j-th equation; but I am not sure it is doable in the way ode wants > it -because I really don't understand how ode wants stuff. You need to write your system of differential equations as a system of first-order differential equations. if X=[X_1, ..., X_N] is the vector of unknown signals, the function you have to supply is the function that computes the time derivatives of these signals. def func_ode(X,t): dX = np.zeros_like(X) for n in xrange(len(X)): dX[n]=... return dX then you call the odeint routine giving an initial condition X0 and a time range TimeVec: import scipy.integrate as integrate X = integrate.odeint(func_ode, X0, TimeVec) > - As for the Jacobian, I'm lost. You do not have to provide the jacobian. The Ode Solver recommends but does not require it. -- Fabrice Silva LMA UPR CNRS 7051 From jsseabold at gmail.com Wed May 27 11:59:02 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 May 2009 11:59:02 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D6247.5090004@gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> Message-ID: On Wed, May 27, 2009 at 11:54 AM, ms wrote: > josef.pktd at gmail.com ha scritto: >> On Wed, May 27, 2009 at 10:05 AM, ms wrote: >>> jason-sage at creativetrax.com ha scritto: >>>> Is there a recommended way now of calculating the slope of a linear >>>> regression? 
?Using the scipy.stats.linregress function gives a >>>> deprecation warning, apparently because that function uses the >>>> scipy.mean function: >>> I think you can use polyfit for doing linear regression, isn't it? >> >> but you don't get the slope coefficient and the standard errors, if >> you want more than just prediction. > > You mean the correlation coefficient? This is numpy.corrcoef() or > something like that. He means that polyfit does not provide the Betas in a linear fit of, for example, y = Beta * x + Beta2 * x**2 and their associated standard errors. It will only give you the predictions (ie., Y-hats) for your data based on the fit. Yes, my GSoC will be of interest to you, if you use SciPy for linear regression. Right now it's a bit slow going as I have comps looming over my head in the next week and much of the work is being done outside of SciPy until the code to be included is cleaned up and some design issues are settled, but significant strides will be made in the next few weeks. You can follow the progress here with some examples and tutorials (for usage and stats probably) . Posts will be more frequent over the next three months (I promise). Skipper From devicerandom at gmail.com Wed May 27 12:13:44 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 17:13:44 +0100 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: <1243439437.14846.51.camel@localhost.localdomain> References: <4A1D3111.6060000@gmail.com> <4A1D5CB0.1010100@gmail.com> <1243439437.14846.51.camel@localhost.localdomain> Message-ID: <4A1D66B8.9030902@gmail.com> Fabrice Silva ha scritto: > Le mercredi 27 mai 2009 ? 16:30 +0100, ms a ?crit : > You need to write your system of differential equations as a system of > first-order differential equations. > if X=[X_1, ..., X_N] is the vector of unknown signals, the function you > have to supply is the function that computes the time derivatives of > these signals. > > def func_ode(X,t): > dX = np.zeros_like(X) > for n in xrange(len(X)): > dX[n]=... > return dX > > then you call the odeint routine giving an initial condition X0 and a > time range TimeVec: > import scipy.integrate as integrate > X = integrate.odeint(func_ode, X0, TimeVec) > >> - As for the Jacobian, I'm lost. > You do not have to provide the jacobian. The Ode Solver recommends but > does not require it. Ok, looks easier than I thought (sorry, but I'm multitasking a lot of things and I cannot concentrate as much as I should). thanks, m. From jsseabold at gmail.com Wed May 27 12:08:55 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 May 2009 12:08:55 -0400 Subject: [SciPy-user] linear regression In-Reply-To: References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> Message-ID: On Wed, May 27, 2009 at 11:59 AM, Skipper Seabold wrote: > On Wed, May 27, 2009 at 11:54 AM, ms wrote: >> josef.pktd at gmail.com ha scritto: >>> On Wed, May 27, 2009 at 10:05 AM, ms wrote: >>>> jason-sage at creativetrax.com ha scritto: >>>>> Is there a recommended way now of calculating the slope of a linear >>>>> regression? ?Using the scipy.stats.linregress function gives a >>>>> deprecation warning, apparently because that function uses the >>>>> scipy.mean function: >>>> I think you can use polyfit for doing linear regression, isn't it? >>> >>> but you don't get the slope coefficient and the standard errors, if >>> you want more than just prediction. 
>> >> You mean the correlation coefficient? This is numpy.corrcoef() or >> something like that. > > He means that polyfit does not provide the Betas in a linear fit of, > for example, y = Beta * x + Beta2 * x**2 and their associated standard > errors. ?It will only give you the predictions (ie., Y-hats) for your > data based on the fit. Err, sorry I don't think this isn't right for polyfit after having a look. One day I will learn to look before I leap... Have a look here From josef.pktd at gmail.com Wed May 27 12:19:35 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 12:19:35 -0400 Subject: [SciPy-user] linear regression In-Reply-To: References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> Message-ID: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> On Wed, May 27, 2009 at 12:08 PM, Skipper Seabold wrote: > On Wed, May 27, 2009 at 11:59 AM, Skipper Seabold wrote: >> On Wed, May 27, 2009 at 11:54 AM, ms wrote: >>> josef.pktd at gmail.com ha scritto: >>>> On Wed, May 27, 2009 at 10:05 AM, ms wrote: >>>>> jason-sage at creativetrax.com ha scritto: >>>>>> Is there a recommended way now of calculating the slope of a linear >>>>>> regression? ?Using the scipy.stats.linregress function gives a >>>>>> deprecation warning, apparently because that function uses the >>>>>> scipy.mean function: >>>>> I think you can use polyfit for doing linear regression, isn't it? >>>> >>>> but you don't get the slope coefficient and the standard errors, if >>>> you want more than just prediction. >>> >>> You mean the correlation coefficient? This is numpy.corrcoef() or >>> something like that. >> >> He means that polyfit does not provide the Betas in a linear fit of, >> for example, y = Beta * x + Beta2 * x**2 and their associated standard >> errors. ?It will only give you the predictions (ie., Y-hats) for your >> data based on the fit. > > Err, sorry I don't think this isn't right for polyfit after having a > look. ?One day I will learn to look before I leap... > > Have a look here y = Beta0 + Beta1 * x + Beta2 * x**2 is the second order polynomial. I also should have looked, polyfit returns the polynomial coefficients but doesn't calculate the variance-covariance matrix or standard errors of the OLS estimate. Josef From warren.weckesser at gmail.com Wed May 27 12:34:50 2009 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 27 May 2009 11:34:50 -0500 Subject: [SciPy-user] integrating a system of differential equations In-Reply-To: <4A1D66B8.9030902@gmail.com> References: <4A1D3111.6060000@gmail.com> <4A1D5CB0.1010100@gmail.com> <1243439437.14846.51.camel@localhost.localdomain> <4A1D66B8.9030902@gmail.com> Message-ID: <114880320905270934s7bc09ebg5846bf6227cb7551@mail.gmail.com> There are also examples at scipy.org: http://www.scipy.org/LoktaVolterraTutorial http://www.scipy.org/Cookbook/CoupledSpringMassSystem On Wed, May 27, 2009 at 11:13 AM, ms wrote: > Fabrice Silva ha scritto: > > Le mercredi 27 mai 2009 ? 16:30 +0100, ms a ?crit : > > You need to write your system of differential equations as a system of > > first-order differential equations. > > if X=[X_1, ..., X_N] is the vector of unknown signals, the function you > > have to supply is the function that computes the time derivatives of > > these signals. > > > > def func_ode(X,t): > > dX = np.zeros_like(X) > > for n in xrange(len(X)): > > dX[n]=... 
> > return dX > > > > then you call the odeint routine giving an initial condition X0 and a > > time range TimeVec: > > import scipy.integrate as integrate > > X = integrate.odeint(func_ode, X0, TimeVec) > > > >> - As for the Jacobian, I'm lost. > > You do not have to provide the jacobian. The Ode Solver recommends but > > does not require it. > > Ok, looks easier than I thought (sorry, but I'm multitasking a lot of > things and I cannot concentrate as much as I should). > > thanks, > m. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Wed May 27 13:02:07 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 27 May 2009 12:02:07 -0500 Subject: [SciPy-user] [half OT?] best way to store a spectrum In-Reply-To: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> References: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> Message-ID: <4A1D720F.5080702@gmail.com> Davide Cittaro wrote: > Hi all, > I have a bunch of spectra listed within a file (mass/intensity > values). I'm planning to analyze them and match with theoretical > spectra... In you opinion, which is the best way to store them for an > efficient analysis? > In the past this has been done with python arrays (i.e. a spectrum was > an array of arrays, each peak was a mass/intensity array)... > > Thanks > > d > /* > Davide Cittaro > > Cogentech - Consortium for Genomic Technologies > via adamello, 16 > 20139 Milano > Italy > > tel.: +39(02)574303007 > e-mail: davide.cittaro at ifom-ieo-campus.it > > */ > > Can you please be more specific? Exactly what do you mean by 'analysis'? Do you actually use the intensity values or only those values above a set threshold? What do you really mean by a 'bunch of spectra'? Does each experimental spectrum have a unique corresponding theoretical spectrum? Do you compare the 'bunch of spectra' to a single theoretical spectrum? Do you compare the 'bunch of spectra' to a bunch of theoretical spectrum? What exactly do you mean by 'match'? To be efficient, you probably want to: 1) Vectorize the operations so you want to avoid looping over each spectrum. So a single large array may help. 2) Find a suitable approach for your analysis as there may be more than one approach. Especially getting as many of the calculations as possible into lapack functions rather than Python should be faster. 3) Try to factoring out constants. Bruce From devicerandom at gmail.com Wed May 27 12:35:19 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 17:35:19 +0100 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> Message-ID: <4A1D6BC7.2020402@gmail.com> josef.pktd at gmail.com ha scritto: >> Have a look here > > y = Beta0 + Beta1 * x + Beta2 * x**2 is the second order polynomial. > > I also should have looked, polyfit returns the polynomial coefficients > but doesn't calculate the variance-covariance matrix or standard > errors of the OLS estimate. AFAIK, the ODR fitting routines return all these parameters, so one can maybe use that for linear fitting too. m. 
From josef.pktd at gmail.com Wed May 27 14:28:00 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 14:28:00 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <4A1D6BC7.2020402@gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> Message-ID: <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> On Wed, May 27, 2009 at 12:35 PM, ms wrote: > josef.pktd at gmail.com ha scritto: >>> Have a look here >> >> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >> >> I also should have looked, polyfit returns the polynomial coefficients >> but doesn't calculate the variance-covariance matrix or standard >> errors of the OLS estimate. > > AFAIK, the ODR fitting routines return all these parameters, so one can > maybe use that for linear fitting too. you mean scipy.odr? I never looked at it in details. Conceptionally it is very similar to standard regression, but I've never seen an application for it, nor do I know the probability theoretic or econometric background of it. The results for many cases will be relatively close to standard least squares. A google search shows links to curve fitting but not to any econometric theory. On the other hand, there is a very large literature on how to treat measurement errors and endogeneity of regressors for (standard) least squares and maximum likelihood. The difference between curve fitting and (maybe prediction) and parameter estimation in many social/economic sciences is that we want to get a reliable parameter estimate and not just a well fitting curve. How much does the average lifetime income increase when finishing college compared to only finishing high school? Did the price of oil go up because of demand side or supply side effects? Did the availability of contraceptives decrease crime? I also haven't spend the time yet to figure out what scipy.maxentropy really does. Josef From jsseabold at gmail.com Wed May 27 14:35:45 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 27 May 2009 14:35:45 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> Message-ID: On Wed, May 27, 2009 at 2:28 PM, wrote: > On Wed, May 27, 2009 at 12:35 PM, ms wrote: >> josef.pktd at gmail.com ha scritto: >>>> Have a look here >>> >>> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >>> >>> I also should have looked, polyfit returns the polynomial coefficients >>> but doesn't calculate the variance-covariance matrix or standard >>> errors of the OLS estimate. >> >> AFAIK, the ODR fitting routines return all these parameters, so one can >> maybe use that for linear fitting too. > It does look like the ODR routines can be used for linear fitting, using the least squares fitting criterion (and OLS assumptions about the errors in your data being restricted to the dependent variable) as opposed to the ODR criterion. But I don't know too much about this either. 
From gary.pajer at gmail.com Wed May 27 14:50:20 2009 From: gary.pajer at gmail.com (Gary Pajer) Date: Wed, 27 May 2009 14:50:20 -0400 Subject: [SciPy-user] [half OT?] best way to store a spectrum In-Reply-To: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> References: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> Message-ID: <88fe22a0905271150l1eb7a7ccpf14abc0205973373@mail.gmail.com> On Wed, May 27, 2009 at 11:34 AM, Davide Cittaro < davide.cittaro at ifom-ieo-campus.it> wrote: > Hi all,I have a bunch of spectra listed within a file (mass/intensity > values). I'm planning to analyze them and match with theoretical spectra... > In you opinion, which is the best way to store them for an efficient > analysis? > In the past this has been done with python arrays (i.e. a spectrum was an > array of arrays, each peak was a mass/intensity array)... > > Thanks > > d > /* > Davide Cittaro > I note that you specifically ask about storage. I store spectra, too. Each one has 5,000 - 10,000 data points, and I have sequences of them. Up until recently I was simply storing them in numpy arrays. When the length of the sequence got up to several hundred I switched to hdf5/PyTables. The greatest advantage is that I don't worry so much about the structure of my saved datasets. I was starting to lose sleep. I can also more conveniently store the small bits of metadata, and other data. Now I'm looking into h5py to lower (?) the overhead. My primary need is storage, and I don't need PyTables rich abilites. hth, gary > > Cogentech - Consortium for Genomic Technologies > via adamello, 16 > 20139 Milano > Italy > > tel.: +39(02)574303007 > e-mail: davide.cittaro at ifom-ieo-campus.it > */ > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From devicerandom at gmail.com Wed May 27 15:00:25 2009 From: devicerandom at gmail.com (ms) Date: Wed, 27 May 2009 20:00:25 +0100 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> Message-ID: <4A1D8DC9.2060504@gmail.com> josef.pktd at gmail.com ha scritto: > On Wed, May 27, 2009 at 12:35 PM, ms wrote: >> josef.pktd at gmail.com ha scritto: >>>> Have a look here >>> y = Beta0 + Beta1 * x + Beta2 * x**2 is the second order polynomial. >>> >>> I also should have looked, polyfit returns the polynomial coefficients >>> but doesn't calculate the variance-covariance matrix or standard >>> errors of the OLS estimate. >> AFAIK, the ODR fitting routines return all these parameters, so one can >> maybe use that for linear fitting too. > > you mean scipy.odr? Yes. I use it for non-linear fitting, and it gives parameters, standard deviations and covariance matrix. > I never looked at it in details. Conceptionally it is very similar to > standard regression, but I've never seen an application for it, nor do > I know the probability theoretic or econometric background of it. The > results for many cases will be relatively close to standard least > squares. 
You can explicitly tell the odr function to use least squares fitting. > A google search shows links to curve fitting but not to any > econometric theory. The math of fitting data to a function, as far as I know, is independent of the field of application. The functions and the data, of course, are not. :) > The difference between curve fitting and (maybe prediction) and > parameter estimation in many social/economic sciences is that we want > to get a reliable parameter estimate and not just a well fitting > curve. Ehm, I don't know what is your knowledge of sciences, but trust me, no one does curve fitting only to obtain "a well fitting curve": everyone wants parameters, obviously, and their statistics. scipy.odr gives me these nicely. > How much does the average lifetime income increase when finishing > college compared to only finishing high school? Did the price of oil > go up because of demand side or supply side effects? Did the > availability of contraceptives decrease crime? On a side note, you won't answer these interesting questions with linear fitting only, because correlation does not mean causation. m. From robert.kern at gmail.com Wed May 27 15:03:54 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 14:03:54 -0500 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> Message-ID: <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> On Wed, May 27, 2009 at 13:28, wrote: > On Wed, May 27, 2009 at 12:35 PM, ms wrote: >> josef.pktd at gmail.com ha scritto: >>>> Have a look here >>> >>> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >>> >>> I also should have looked, polyfit returns the polynomial coefficients >>> but doesn't calculate the variance-covariance matrix or standard >>> errors of the OLS estimate. >> >> AFAIK, the ODR fitting routines return all these parameters, so one can >> maybe use that for linear fitting too. > > you mean scipy.odr? > > I never looked at it in details. Conceptionally it is very similar to > standard regression, but I've never seen an application for it, nor do > I know the probability theoretic or econometric background of it. ODR is nonlinear least-squares with errors in both variables (e.g. minimizing the weighted sum of squared distances from each point to the corresponding closest points on the curve rather than "straight down" as in OLS). scipy.odr implements both ODR and OLS. It also implements implicit regression, where the relationship between variables is not expressed as "y=f(x)" but "f(x,y)=0" such as fitting an ellipse. > The > results for many cases will be relatively close to standard least > squares. > A google search shows links to curve fitting but not to any > econometric theory. On the other hand, there is a very large > literature on how to treat measurement errors and endogeneity of > regressors for (standard) least squares and maximum likelihood. The extension is straightforward. ODR is really just a generalization of least-squares. Unfortunately, the links to the relevant papers seem to have died. 
I've put them up here: http://www.mechanicalkern.com/static/odr_vcv.pdf http://www.mechanicalkern.com/static/odr_ams.pdf http://www.mechanicalkern.com/static/odrpack_guide.pdf -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed May 27 15:22:23 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 15:22:23 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D488F.6070603@gmail.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> Message-ID: <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> On Wed, May 27, 2009 at 3:03 PM, Robert Kern wrote: > On Wed, May 27, 2009 at 13:28, ? wrote: >> On Wed, May 27, 2009 at 12:35 PM, ms wrote: >>> josef.pktd at gmail.com ha scritto: >>>>> Have a look here >>>> >>>> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >>>> >>>> I also should have looked, polyfit returns the polynomial coefficients >>>> but doesn't calculate the variance-covariance matrix or standard >>>> errors of the OLS estimate. >>> >>> AFAIK, the ODR fitting routines return all these parameters, so one can >>> maybe use that for linear fitting too. >> >> you mean scipy.odr? >> >> I never looked at it in details. Conceptionally it is very similar to >> standard regression, but I've never seen an application for it, nor do >> I know the probability theoretic or econometric background of it. > > ODR is nonlinear least-squares with errors in both variables (e.g. > minimizing the weighted sum of squared distances from each point to > the corresponding closest points on the curve rather than "straight > down" as in OLS). scipy.odr implements both ODR and OLS. It also > implements implicit regression, where the relationship between > variables is not expressed as "y=f(x)" but "f(x,y)=0" such as fitting > an ellipse. > >> The >> results for many cases will be relatively close to standard least >> squares. >> A google search shows links to curve fitting but not to any >> econometric theory. On the other hand, there is a very large >> literature on how to treat measurement errors and endogeneity of >> regressors for (standard) least squares and maximum likelihood. > > The extension is straightforward. ODR is really just a generalization > of least-squares. Unfortunately, the links to the relevant papers seem > to have died. I've put them up here: > > http://www.mechanicalkern.com/static/odr_vcv.pdf > http://www.mechanicalkern.com/static/odr_ams.pdf > http://www.mechanicalkern.com/static/odrpack_guide.pdf > Thanks for the links, I finally also found out that in Wikipedia it is under "Total Regression". Under "Errors-in-Variables model" it says " Error-in-variables models can be estimated in several different ways. Besides those outlined here, see: * total least squares for a method of fitting which does not arise from a statistical model; " >From a brief reading, I think that the main limitation is that it doesn't allow you to explicitly model the joint error structure. 
I looks like, this will be implicitly done by the scaling factors and other function parameters. But this is just my first impression. While in econometrics the most common methods are instrumental variables, and two-stage estimators, which both try to explicitly remove the randomness in the regressors (at least the part that is correlated with the regression error). I just looked at the published docs for odr and they could use quite a bit of reorganization (e.g docstring of odrpack is missing). Reading the source files is currently more informative. Josef From robert.kern at gmail.com Wed May 27 15:37:14 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 14:37:14 -0500 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> Message-ID: <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> On Wed, May 27, 2009 at 14:22, wrote: > On Wed, May 27, 2009 at 3:03 PM, Robert Kern wrote: >> On Wed, May 27, 2009 at 13:28, ? wrote: >>> On Wed, May 27, 2009 at 12:35 PM, ms wrote: >>>> josef.pktd at gmail.com ha scritto: >>>>>> Have a look here >>>>> >>>>> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >>>>> >>>>> I also should have looked, polyfit returns the polynomial coefficients >>>>> but doesn't calculate the variance-covariance matrix or standard >>>>> errors of the OLS estimate. >>>> >>>> AFAIK, the ODR fitting routines return all these parameters, so one can >>>> maybe use that for linear fitting too. >>> >>> you mean scipy.odr? >>> >>> I never looked at it in details. Conceptionally it is very similar to >>> standard regression, but I've never seen an application for it, nor do >>> I know the probability theoretic or econometric background of it. >> >> ODR is nonlinear least-squares with errors in both variables (e.g. >> minimizing the weighted sum of squared distances from each point to >> the corresponding closest points on the curve rather than "straight >> down" as in OLS). scipy.odr implements both ODR and OLS. It also >> implements implicit regression, where the relationship between >> variables is not expressed as "y=f(x)" but "f(x,y)=0" such as fitting >> an ellipse. >> >>> The >>> results for many cases will be relatively close to standard least >>> squares. >>> A google search shows links to curve fitting but not to any >>> econometric theory. On the other hand, there is a very large >>> literature on how to treat measurement errors and endogeneity of >>> regressors for (standard) least squares and maximum likelihood. >> >> The extension is straightforward. ODR is really just a generalization >> of least-squares. Unfortunately, the links to the relevant papers seem >> to have died. I've put them up here: >> >> http://www.mechanicalkern.com/static/odr_vcv.pdf >> http://www.mechanicalkern.com/static/odr_ams.pdf >> http://www.mechanicalkern.com/static/odrpack_guide.pdf >> > > Thanks for the links, I finally also found out that in Wikipedia it is > under "Total Regression". 
Under "Errors-in-Variables model" it says > > " > Error-in-variables models can be estimated in several different ways. > Besides those outlined here, see: > ? ? ? ?* total least squares for a method of fitting which does not > arise from a statistical model; > " > > >From a brief reading, I think that the main limitation is that it > doesn't allow you to explicitly model the joint error structure. I > looks like, this will be implicitly done by the scaling factors and > other function parameters. But this is just my first impression. For "y=f(x)" models, this is true. Both y and x can be multivariate, and you can express the covariance of the uncertainties for each, but not covariance between the y and x uncertainties. This is because of the numerical tricks used for efficient implementation. However, "f(x)=0" models can express covariances between all dimensions of x. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From davide.cittaro at ifom-ieo-campus.it Wed May 27 15:48:42 2009 From: davide.cittaro at ifom-ieo-campus.it (Davide Cittaro) Date: Wed, 27 May 2009 21:48:42 +0200 Subject: [SciPy-user] [half OT?] best way to store a spectrum In-Reply-To: <4A1D720F.5080702@gmail.com> References: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> <4A1D720F.5080702@gmail.com> Message-ID: <89B30405-32CD-4A96-A4BB-462FFC2D0E70@ifom-ieo-campus.it> On May 27, 2009, at 7:02 PM, Bruce Southey wrote: Hi > > > Can you please be more specific? > You're right :-) > Exactly what do you mean by 'analysis'? These spectra come from proteomic experiments in which peptides are fragmented into ion series which should be matched with theoretical spectra (predicted from peptide aminoacidic sequence) to identify sequence themselves... > Do you actually use the intensity values or only those values above a > set threshold? :-) In a first attempt I don't think, theoretical spectra are difficult to model on intensities. I may use intensity values to get only most intense peaks > What do you really mean by a 'bunch of spectra'? Thousands or dozen of thousands usually... > Does each experimental spectrum have a unique corresponding > theoretical > spectrum? No > Do you compare the 'bunch of spectra' to a single theoretical > spectrum? > Do you compare the 'bunch of spectra' to a bunch of theoretical > spectrum? I have to find which "theoretical" best matches with an experimental (or viceversa)... > What exactly do you mean by 'match'? > LOL! Sorry if I laugh... scoring a match is a story apart :-) > To be efficient, you probably want to: > 1) Vectorize the operations so you want to avoid looping over each > spectrum. So a single large array may help. > 2) Find a suitable approach for your analysis as there may be more > than > one approach. Especially getting as many of the calculations as > possible > into lapack functions rather than Python should be faster. > 3) Try to factoring out constants. Thanks d -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide.cittaro at ifom-ieo-campus.it Wed May 27 15:51:53 2009 From: davide.cittaro at ifom-ieo-campus.it (Davide Cittaro) Date: Wed, 27 May 2009 21:51:53 +0200 Subject: [SciPy-user] [half OT?] 
best way to store a spectrum In-Reply-To: <88fe22a0905271150l1eb7a7ccpf14abc0205973373@mail.gmail.com> References: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> <88fe22a0905271150l1eb7a7ccpf14abc0205973373@mail.gmail.com> Message-ID: <2640D391-83F5-468E-8D66-A79A97863682@ifom-ieo-campus.it> On May 27, 2009, at 8:50 PM, Gary Pajer wrote: > > I note that you specifically ask about storage. > Well, I'm not that fluent with "Engrish"... I mean a way to represent a spectra with an appropriate data structure which can help in selecting single spectrum regions or peaks... > I store spectra, too. Each one has 5,000 - 10,000 data points, and > I have sequences of them. Up until recently I was simply storing > them in numpy arrays. Did you use arrays with (2,1) shape? or an array for mass and an array for intensity? > When the length of the sequence got up to several hundred I switched > to hdf5/PyTables. The greatest advantage is that I don't worry so > much about the structure of my saved datasets. I was starting to > lose sleep. I can also more conveniently store the small bits of > metadata, and other data. Now I'm looking into h5py to lower (?) > the overhead. My primary need is storage, and I don't need PyTables > rich abilites. > Thanks so much. As I will deal with storage issues I will definitely take a look to those (I probably need them for other projects ^__^) d -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed May 27 16:29:45 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 16:29:45 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> Message-ID: <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> On Wed, May 27, 2009 at 3:37 PM, Robert Kern wrote: > On Wed, May 27, 2009 at 14:22, ? wrote: >> On Wed, May 27, 2009 at 3:03 PM, Robert Kern wrote: >>> On Wed, May 27, 2009 at 13:28, ? wrote: >>>> On Wed, May 27, 2009 at 12:35 PM, ms wrote: >>>>> josef.pktd at gmail.com ha scritto: >>>>>>> Have a look here >>>>>> >>>>>> y = Beta0 + Beta1 * x + Beta2 * x**2 ? is the second order polynomial. >>>>>> >>>>>> I also should have looked, polyfit returns the polynomial coefficients >>>>>> but doesn't calculate the variance-covariance matrix or standard >>>>>> errors of the OLS estimate. >>>>> >>>>> AFAIK, the ODR fitting routines return all these parameters, so one can >>>>> maybe use that for linear fitting too. >>>> >>>> you mean scipy.odr? >>>> >>>> I never looked at it in details. Conceptionally it is very similar to >>>> standard regression, but I've never seen an application for it, nor do >>>> I know the probability theoretic or econometric background of it. >>> >>> ODR is nonlinear least-squares with errors in both variables (e.g. >>> minimizing the weighted sum of squared distances from each point to >>> the corresponding closest points on the curve rather than "straight >>> down" as in OLS). scipy.odr implements both ODR and OLS. 
It also >>> implements implicit regression, where the relationship between >>> variables is not expressed as "y=f(x)" but "f(x,y)=0" such as fitting >>> an ellipse. >>> >>>> The >>>> results for many cases will be relatively close to standard least >>>> squares. >>>> A google search shows links to curve fitting but not to any >>>> econometric theory. On the other hand, there is a very large >>>> literature on how to treat measurement errors and endogeneity of >>>> regressors for (standard) least squares and maximum likelihood. >>> >>> The extension is straightforward. ODR is really just a generalization >>> of least-squares. Unfortunately, the links to the relevant papers seem >>> to have died. I've put them up here: >>> >>> http://www.mechanicalkern.com/static/odr_vcv.pdf >>> http://www.mechanicalkern.com/static/odr_ams.pdf >>> http://www.mechanicalkern.com/static/odrpack_guide.pdf >>> >> >> Thanks for the links, I finally also found out that in Wikipedia it is >> under "Total Regression". Under "Errors-in-Variables model" it says >> >> " >> Error-in-variables models can be estimated in several different ways. >> Besides those outlined here, see: >> ? ? ? ?* total least squares for a method of fitting which does not >> arise from a statistical model; >> " >> >> >From a brief reading, I think that the main limitation is that it >> doesn't allow you to explicitly model the joint error structure. I >> looks like, this will be implicitly done by the scaling factors and >> other function parameters. But this is just my first impression. > > For "y=f(x)" models, this is true. Both y and x can be multivariate, > and you can express the covariance of the uncertainties for each, but > not covariance between the y and x uncertainties. This is because of > the numerical tricks used for efficient implementation In this case, OLS would still be unbiased in the linear case, but maybe not efficient. I don't know about the non-linear case. > . However, > "f(x)=0" models can express covariances between all dimensions of x. When I saw initially the implicit function estimation, I thought this might be pretty useful. But I will have to play with odr, to see how much it can be used for more "traditional" statistical analysis. Josef From bsouthey at gmail.com Wed May 27 16:44:50 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 27 May 2009 15:44:50 -0500 Subject: [SciPy-user] [half OT?] best way to store a spectrum In-Reply-To: <89B30405-32CD-4A96-A4BB-462FFC2D0E70@ifom-ieo-campus.it> References: <1A20D1A1-03C5-42A6-8E0E-25E06759D849@ifom-ieo-campus.it> <4A1D720F.5080702@gmail.com> <89B30405-32CD-4A96-A4BB-462FFC2D0E70@ifom-ieo-campus.it> Message-ID: <4A1DA642.2040705@gmail.com> Davide Cittaro wrote: > > On May 27, 2009, at 7:02 PM, Bruce Southey wrote: > Hi >> >> > >> Can you please be more specific? >> > > You're right :-) > >> Exactly what do you mean by 'analysis'? > > These spectra come from proteomic experiments in which peptides are > fragmented into ion series which should be matched with theoretical > spectra (predicted from peptide aminoacidic sequence) to identify > sequence themselves... > >> Do you actually use the intensity values or only those values above a >> set threshold? > > :-) In a first attempt I don't think, theoretical spectra are > difficult to model on intensities. I may use intensity values to get > only most intense peaks > >> What do you really mean by a 'bunch of spectra'? > > Thousands or dozen of thousands usually... 
> >> Does each experimental spectrum have a unique corresponding theoretical >> spectrum? > > No > >> Do you compare the 'bunch of spectra' to a single theoretical spectrum? >> Do you compare the 'bunch of spectra' to a bunch of theoretical >> spectrum? > > I have to find which "theoretical" best matches with an experimental > (or viceversa)... > >> What exactly do you mean by 'match'? >> > > LOL! Sorry if I laugh... scoring a match is a story apart :-) > >> To be efficient, you probably want to: >> 1) Vectorize the operations so you want to avoid looping over each >> spectrum. So a single large array may help. >> 2) Find a suitable approach for your analysis as there may be more than >> one approach. Especially getting as many of the calculations as possible >> into lapack functions rather than Python should be faster. >> 3) Try to factoring out constants. > > Thanks > > d > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, I do have some very basic understanding of the problem. Without knowing the approach(es) that you are using, there is not a lot to add. Basically you need to store it in way that you can quickly access it in the desired format. For example, most approaches filter first on the overall 'protein' mass so it may be important to quickly retrieve spectra based on range of masses rather than going through each spectrum one by one. As Gary suggests, hdf5/PyTables may be beneficial. Bruce From gael.varoquaux at normalesup.org Wed May 27 16:50:50 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 27 May 2009 22:50:50 +0200 Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression) In-Reply-To: <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> References: <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <4A1D6247.5090004@gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> Message-ID: <20090527205050.GA26519@phare.normalesup.org> I have been fighting a bit with a OLS regression problem (my ignorance in regression is wide), and a remark by Robert just prompted me to ask the list: On Wed, May 27, 2009 at 02:37:14PM -0500, Robert Kern wrote: > "f(x)=0" models can express covariances between all dimensions of x. Sorry for asking you about my 'homework', but people seem so knowledgeable... I have a multivariate dataset X, and a given sparse, lower triangular, boolean, matrix T with an empty diagonal. I am interested in finding the matrix R for which support(R) == support(T), that is the OLS solution to: Y = np.dot(R, Y) I seems to me that the problem can be written in terms of a classic OLS problem, but I have played with it, and couldn't figure it out. I don't want to implement an optimisation routine of the L2 norm, because I have a large number of parameters, and the resulting optimisation will be dead slow. I am open to any suggestions, or references. 
Thanks a lot, Ga?l From robert.kern at gmail.com Wed May 27 16:55:18 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 15:55:18 -0500 Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression) In-Reply-To: <20090527205050.GA26519@phare.normalesup.org> References: <1cd32cbb0905270701w7ba5874egd4c5062c5996b0e2@mail.gmail.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <20090527205050.GA26519@phare.normalesup.org> Message-ID: <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> On Wed, May 27, 2009 at 15:50, Gael Varoquaux wrote: > I have been fighting a bit with a OLS regression problem (my ignorance in > regression is wide), and a remark by Robert just prompted me to ask the > list: > > On Wed, May 27, 2009 at 02:37:14PM -0500, Robert Kern wrote: >> "f(x)=0" models can express covariances between all dimensions of x. > > Sorry for asking you about my 'homework', but people seem so > knowledgeable... > > I have a multivariate dataset X, and a given sparse, lower triangular, > boolean, matrix T with an empty diagonal. I am interested in finding the > matrix R for which support(R) == support(T), that is the OLS solution to: > > Y = np.dot(R, Y) Where did Y come from? And where did X and T go? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jason-sage at creativetrax.com Wed May 27 17:04:27 2009 From: jason-sage at creativetrax.com (jason-sage at creativetrax.com) Date: Wed, 27 May 2009 16:04:27 -0500 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905270659t3efa31cbrf9f7fa3a2753eb07@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270659t3efa31cbrf9f7fa3a2753eb07@mail.gmail.com> Message-ID: <4A1DAADB.5090604@creativetrax.com> josef.pktd at gmail.com wrote: > On Wed, May 27, 2009 at 9:35 AM, wrote: > >> Is there a recommended way now of calculating the slope of a linear >> regression? Using the scipy.stats.linregress function gives a >> deprecation warning, apparently because that function uses the >> scipy.mean function: >> >> sage: import numpy >> sage: import scipy.stats >> sage: scipy.stats.linregress(numpy.asarray([4,3,2,1,2,3,4]), >> numpy.asarray([1,2,3,4,3,2,1])) >> /home/jason/download/sage-sage-4.0.alpha0.5/local/lib/python2.5/site-packages/scipy/stats/stats.py:420: >> DeprecationWarning: scipy.stats.mean is deprecated; please update your >> code to use numpy.mean. >> Please note that: >> - numpy.mean axis argument defaults to None, not 0 >> - numpy.mean has a ddof argument to replace bias in a more general >> manner. >> scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, >> axis=0, ddof=1). >> axis=0, ddof=1).""", DeprecationWarning) >> (-1.0, 5.0, -1.0, 1.9206748078018268e-50, 0.0) >> >> >> This is scipy 0.7.0. >> >> > > I backported a fix for incorrect slopes standard error > (http://projects.scipy.org/scipy/ticket/874) > together with the switch to using numpy versions of the depreciated > stats function. > Thanks. 
I tested the fixes, and it's slower than np.polyfit, so for now (unless there is good reason not to), I'm moving the one call over to use np.polyfit. > However, not all usage of the depreciated functions has been > backported to 0.7.1, but all are (supposed to be) fixed in the trunk > for 0.8. > > So, these kind of depreciation warnings in 0.7.0 and 0.7.1 are just > the result of unfinished conversion to numpy stats functions. > > Thanks. I already fixed a lot of the deprecation warnings (by switching to the numpy functions) we received from the Sage doctests regarding the mean, variance, and std stats functions. Dare I ask for what the give-or-take-a-million-years deadline for 0.8 is? Thanks for a great project! Jason From gael.varoquaux at normalesup.org Wed May 27 17:53:40 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 27 May 2009 23:53:40 +0200 Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression) In-Reply-To: <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> References: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <20090527205050.GA26519@phare.normalesup.org> <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> Message-ID: <20090527215340.GA12563@phare.normalesup.org> On Wed, May 27, 2009 at 03:55:18PM -0500, Robert Kern wrote: > On Wed, May 27, 2009 at 15:50, Gael Varoquaux > wrote: > > I have been fighting a bit with a OLS regression problem (my ignorance in > > regression is wide), and a remark by Robert just prompted me to ask the > > list: > > On Wed, May 27, 2009 at 02:37:14PM -0500, Robert Kern wrote: > >> "f(x)=0" models can express covariances between all dimensions of x. > > Sorry for asking you about my 'homework', but people seem so > > knowledgeable... > > I have a multivariate dataset X, and a given sparse, lower triangular, > > boolean, matrix T with an empty diagonal. I am interested in finding the > > matrix R for which support(R) == support(T), that is the OLS solution to: > > Y = np.dot(R, Y) > Where did Y come from? And where did X and T go? Darn, sorry. Y and X are the same thing: my data. T is only there to specify the support of R. Another way to put it is that I know that a large fraction of the coefficients of R are zeros. I have a hunch that I need to 'unroll' the non-zero coefficients, and get back to a simpler, and well-known OLS estimation problem, but I couldn't do it. 
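Concretely, the kind of unrolling I have in mind is something like this (a
rough, untested sketch with toy sizes; I am not convinced it really gives
the OLS solution of the joint problem):

import numpy as np

K, N = 5, 100                                  # toy sizes
Y = np.random.randn(K, N)                      # my data, one signal per row
T = np.tril(np.random.rand(K, K) < 0.3, -1)    # given support, empty diagonal

R = np.zeros((K, K))
for i in range(K):
    idx = np.flatnonzero(T[i])
    if len(idx) > 0:
        # OLS for row i alone: Y[i] =~ np.dot(R[i, idx], Y[idx])
        beta, res, rank, sv = np.linalg.lstsq(Y[idx].T, Y[i])
        R[i, idx] = beta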
Thanks,
Gaël

From josef.pktd at gmail.com  Wed May 27 18:27:29 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 27 May 2009 18:27:29 -0400
Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression)
In-Reply-To: <20090527215340.GA12563@phare.normalesup.org>
References: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com>
	<4A1D6BC7.2020402@gmail.com>
	<1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com>
	<3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com>
	<1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com>
	<3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com>
	<20090527205050.GA26519@phare.normalesup.org>
	<3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com>
	<20090527215340.GA12563@phare.normalesup.org>
Message-ID: <1cd32cbb0905271527u410fcee0oee1be6ca6545673b@mail.gmail.com>

On Wed, May 27, 2009 at 5:53 PM, Gael Varoquaux wrote:
> On Wed, May 27, 2009 at 03:55:18PM -0500, Robert Kern wrote:
>> On Wed, May 27, 2009 at 15:50, Gael Varoquaux wrote:
>> > I have been fighting a bit with a OLS regression problem (my ignorance in
>> > regression is wide), and a remark by Robert just prompted me to ask the
>> > list:
>
>> > On Wed, May 27, 2009 at 02:37:14PM -0500, Robert Kern wrote:
>> >> "f(x)=0" models can express covariances between all dimensions of x.
>
>> > Sorry for asking you about my 'homework', but people seem so
>> > knowledgeable...
>
>> > I have a multivariate dataset X, and a given sparse, lower triangular,
>> > boolean, matrix T with an empty diagonal. I am interested in finding the
>> > matrix R for which support(R) == support(T), that is the OLS solution to:
>
>> > Y = np.dot(R, Y)
>
>> Where did Y come from? And where did X and T go?
>
> Darn, sorry. Y and X are the same thing: my data. T is only there to
> specify the support of R. Another way to put it is that I know that a
> large fraction of the coefficients of R are zeros.
>
> I have a hunch that I need to 'unroll' the non-zero coefficients, and get
> back to a simpler, and well-known OLS estimation problem, but I couldn't
> do it.

Sounds like a recursive system of linear (simultaneous) equations with
linear restrictions to me.

If you want an unbiased estimator, then going row by row and solving each
linear OLS with linalg.lstsq would be the standard way to go, substituting
the previous estimates of the Y's into the next step.

There might also be a way to estimate it all in one big OLS if you find the
linear transformation matrix that removes the zeros from your R matrix. But
here I'm not sure how easy this is, nor how to get back unbiased estimators.

What are the dimensions of your matrices? If Y is N by K, with N
observations and K regression equations and N > K, what is K?

Josef

From mudit_19a at yahoo.com  Wed May 27 18:39:28 2009
From: mudit_19a at yahoo.com (mudit sharma)
Date: Thu, 28 May 2009 04:09:28 +0530 (IST)
Subject: [SciPy-user] concave and convex function
In-Reply-To: <1cd32cbb0905270727q7294ffd0jd1ecf0858d94080e@mail.gmail.com>
References: <892795.73286.qm@web94915.mail.in2.yahoo.com>
	<1cd32cbb0905170032h685d781s67086670081e9e80@mail.gmail.com>
	<253017.92520.qm@web94903.mail.in2.yahoo.com>
	<3d375d730905181520n7311f516o292bdb18b71b385d@mail.gmail.com>
	<586133.68816.qm@web94909.mail.in2.yahoo.com>
	<1cd32cbb0905270727q7294ffd0jd1ecf0858d94080e@mail.gmail.com>
Message-ID: <400793.19218.qm@web94915.mail.in2.yahoo.com>

My apologies, Josef. I am going to have to use the first derivative, as my
other approach is very time consuming.
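Roughly something like this (an untested sketch on made-up data):

import numpy as np

y = np.random.randn(200).cumsum()   # stand-in for a real series
sign = np.sign(np.diff(y))
# tops: first derivative flips from + to -; bottoms: from - to +
tops = np.where((sign[:-1] > 0) & (sign[1:] < 0))[0] + 1
bottoms = np.where((sign[:-1] < 0) & (sign[1:] > 0))[0] + 1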
I was thinking of smooth data using savitzky golay filter, fitting 5 day time lag and narrow down the search based on best fit and work upward to higher frequency. Then use angle, depth and height parameters to find matches.Although, it's very useful in pattern recognitions( V, W, and M shape trend reversals) but very time consuming as I am dealing very large datasets. So I am just going to use first derivative and then fit just top and bottoms to remove noise. Mudit ----- Original Message ---- From: "josef.pktd at gmail.com" To: SciPy Users List Sent: Wednesday, 27 May, 2009 15:27:25 Subject: Re: [SciPy-user] concave and convex function On Wed, May 27, 2009 at 10:13 AM, mudit sharma wrote: > > Thanks Robert. I appreciate your response. > > I found the solution finally, which is, using Savitzky Golay filter for smoothing as it preserves the shape. Then using peak and trough points detection algorithm. Some useful links here: > http://terpconnect.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm. Unfortunately, all these matlab scripts so will have to write python equivalent. > > Mudit > > > > ----- Original Message ---- > From: Robert Kern > To: SciPy Users List > Sent: Monday, 18 May, 2009 23:20:24 > Subject: Re: [SciPy-user] concave and convex function > > On Mon, May 18, 2009 at 02:57, Sebastian Walter > wrote: >> On Sun, May 17, 2009 at 3:50 PM, mudit sharma wrote: >>> >>> Thanks for your response. >>> >>> By M & W curve I meant M & W shape curves( subset ) and by cycle I meant wave cycle.. >> Is that supposed to describe what is meant by M & W? > > Peak-trough-peak and trough-peak-trough patterns, respectively, like > the shapes of the letters. > >> No offense, but >> if you want help, you should >> state your problem in a way that other ppl understand.... > > His actual question is reasonably well-worded (he wants to classify > the signal into convex and concave portions), but you got distracted > by the irrelevant portion. > > -- I still don't see identifying peaks and troughs anywhere in the initial question. Identifying peaks and troughs is a question for zeros in the first derivative; identifying convex and concave regions is a question for zeros in the second derivative. There is an entire "industry" trying to do this for the business cycle. Josef _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From ndbecker2 at gmail.com Wed May 27 19:16:18 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 27 May 2009 19:16:18 -0400 Subject: [SciPy-user] noncentral F distribution? Message-ID: Does scipy have non central F distribution? (I need cdf for that) From josef.pktd at gmail.com Wed May 27 19:24:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 19:24:08 -0400 Subject: [SciPy-user] noncentral F distribution? In-Reply-To: References: Message-ID: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> On Wed, May 27, 2009 at 7:16 PM, Neal Becker wrote: > Does scipy have non central F distribution? ?(I need cdf for that) > >>> print scipy.stats.ncf.extradoc Non-central F distribution ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) * df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) * (df2+df1*x)**(-(df1+df2)/2) * gamma(df1/2)*gamma(1+df2/2) * L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) / (B(v1/2, v2/2) * gamma((v1+v2)/2)) for df1, df2, nc > 0. 
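for example (made-up parameter values; the arguments are x, then df1, df2, nc):

>>> p = scipy.stats.ncf.cdf(1.5, 5, 10, 0.5)   # P(X <= 1.5)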
>>> scipy.stats.ncf.cdf > note 3rd or 4th moments are wrong Josef From robert.kern at gmail.com Wed May 27 19:35:10 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 18:35:10 -0500 Subject: [SciPy-user] linear regression In-Reply-To: <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> Message-ID: <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> On Wed, May 27, 2009 at 15:29, wrote: > On Wed, May 27, 2009 at 3:37 PM, Robert Kern wrote: >> For "y=f(x)" models, this is true. Both y and x can be multivariate, >> and you can express the covariance of the uncertainties for each, but >> not covariance between the y and x uncertainties. This is because of >> the numerical tricks used for efficient implementation > > In this case, OLS would still be unbiased in the linear case, but > maybe not efficient. Are you sure? I see significant deviations using a simple example (albeit one which is utterly rigged in ODR's favor). The X uncertainties start small and grow with increasing X. The Y uncertainties start large and shrink with increasing X. Plotting the estimates shows some strange structure in the OLS estimates. import numpy as np from scipy.odr import RealData, ODR from scipy.odr.models import unilinear beta_true = np.array([-0.47960828215176365, 5.47674024758398481]) p_x = np.array([0.,.9,1.8,2.6,3.3,4.4,5.2,6.1,6.5,7.4]) p_y = beta_true[0] * p_x + beta_true[1] p_sx = np.array([.03,.03,.04,.035,.07,.11,.13,.22,.74,1.]) p_sy = np.array([1.,.74,.5,.35,.22,.22,.12,.12,.1,.04]) def random_betas(n=500, prng=np.random): """ Compute random parameter vectors from both ODR and OLS by generating random data and fitting it. """ odr_betas = [] ols_betas = [] for i in range(n): x = np.random.normal(p_x, p_sx) y = np.random.normal(p_y, p_sy) # ODR: data = RealData(x, y, sx=p_sx, sy=p_sy) odr = ODR(data, unilinear, beta0=[1., 1.]) odr_out = odr.run() odr_betas.append(odr_out.beta) # Weighted OLS: A = np.ones((len(x), 2)) A[:,0] = x # Weight by the Y error. A /= p_sy[:,np.newaxis] b, res, rank, s = np.linalg.lstsq(A, y/p_sy) ols_betas.append(b) # Alternately: #ols = ODR(data, unilinear, beta0=[1., 1.]) #ols.set_job(fit_type=2) #ols_out = ols.run() #ols_betas.append(ols_out.beta) return np.array(odr_betas).T, np.array(ols_betas).T -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Wed May 27 19:38:29 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 27 May 2009 19:38:29 -0400 Subject: [SciPy-user] noncentral F distribution? References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> Message-ID: josef.pktd at gmail.com wrote: > On Wed, May 27, 2009 at 7:16 PM, Neal Becker wrote: >> Does scipy have non central F distribution? 
(I need cdf for that) >> > >>>> print scipy.stats.ncf.extradoc > > > Non-central F distribution > > ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) > * df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) > * (df2+df1*x)**(-(df1+df2)/2) > * gamma(df1/2)*gamma(1+df2/2) > * L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) > / (B(v1/2, v2/2) * gamma((v1+v2)/2)) > for df1, df2, nc > 0. > >>>> scipy.stats.ncf.cdf > at 0x021DDE90>> > > note 3rd or 4th moments are wrong > > Josef I found the page: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncf.html#scipy.stats.ncf but I don't know what the parameters mean. I was looking for something like: http://www.boost.org/doc/libs/1_39_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/dist_ref/dists/nc_f_dist.html There, a ncf is constructed with 3 parameters, v1, v2, lambda. Then the cdf is given as a function of a single variable, x. In scipy.stats.ncf, there are many constructor parameters. Which correspond to the v1,v2,lambda I was looking for? scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, badvalue=None, name=None, longname=None, shapes=None, extradoc=None) In scipy.stats.ncf the cdf has ncf.cdf(x,dfn,dfd,nc,loc=0,scale=1) again, I don't know what they mean. I think x is my x, but I don't know what the others are. I haven't used scipy stats before, so maybe I'm just not familiar with the interface. (I'm hoping I don't have to go back to program in c++ for this calculation) From robert.kern at gmail.com Wed May 27 19:42:54 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 18:42:54 -0500 Subject: [SciPy-user] noncentral F distribution? In-Reply-To: References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> Message-ID: <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> On Wed, May 27, 2009 at 18:38, Neal Becker wrote: > josef.pktd at gmail.com wrote: > >> On Wed, May 27, 2009 at 7:16 PM, Neal Becker wrote: >>> Does scipy have non central F distribution? ?(I need cdf for that) >>> >> >>>>> print scipy.stats.ncf.extradoc >> >> >> Non-central F distribution >> >> ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) >> ? ? ? ? ? ? ? ? * df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) >> ? ? ? ? ? ? ? ? * (df2+df1*x)**(-(df1+df2)/2) >> ? ? ? ? ? ? ? ? * gamma(df1/2)*gamma(1+df2/2) >> ? ? ? ? ? ? ? ? * L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) >> ? ? ? ? ? ? ? ? / (B(v1/2, v2/2) * gamma((v1+v2)/2)) >> for df1, df2, nc > 0. >> >>>>> scipy.stats.ncf.cdf >> > at 0x021DDE90>> >> >> note 3rd or 4th moments are wrong >> >> Josef > > I found the page: > http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncf.html#scipy.stats.ncf > > but I don't know what the parameters mean. > > I was looking for something like: > http://www.boost.org/doc/libs/1_39_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/dist_ref/dists/nc_f_dist.html > > There, a ncf is constructed with 3 parameters, v1, v2, lambda. These correspond to df1, df2, and nc in the same order. > Then the cdf is given as a function of a single variable, x. > > In scipy.stats.ncf, there are many constructor parameters. ?Which correspond > to the v1,v2,lambda I was looking for? > > scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, > badvalue=None, name=None, longname=None, shapes=None, extradoc=None) > > > In scipy.stats.ncf the cdf has > ncf.cdf(x,dfn,dfd,nc,loc=0,scale=1) > > again, I don't know what they mean. ?I think x is my x, Yes. > but I don't know > what the others are. 
Just ignore loc and scale. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed May 27 19:47:48 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 19:47:48 -0400 Subject: [SciPy-user] noncentral F distribution? In-Reply-To: <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> Message-ID: <1cd32cbb0905271647j219682b9k311399c18e77b7fa@mail.gmail.com> On Wed, May 27, 2009 at 7:42 PM, Robert Kern wrote: > On Wed, May 27, 2009 at 18:38, Neal Becker wrote: >> josef.pktd at gmail.com wrote: >> >>> On Wed, May 27, 2009 at 7:16 PM, Neal Becker wrote: >>>> Does scipy have non central F distribution? ?(I need cdf for that) >>>> >>> >>>>>> print scipy.stats.ncf.extradoc >>> >>> >>> Non-central F distribution >>> >>> ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) >>> ? ? ? ? ? ? ? ? * df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) >>> ? ? ? ? ? ? ? ? * (df2+df1*x)**(-(df1+df2)/2) >>> ? ? ? ? ? ? ? ? * gamma(df1/2)*gamma(1+df2/2) >>> ? ? ? ? ? ? ? ? * L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) >>> ? ? ? ? ? ? ? ? / (B(v1/2, v2/2) * gamma((v1+v2)/2)) >>> for df1, df2, nc > 0. >>> >>>>>> scipy.stats.ncf.cdf >>> >> at 0x021DDE90>> >>> >>> note 3rd or 4th moments are wrong >>> >>> Josef >> >> I found the page: >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncf.html#scipy.stats.ncf >> >> but I don't know what the parameters mean. >> >> I was looking for something like: >> http://www.boost.org/doc/libs/1_39_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/dist_ref/dists/nc_f_dist.html >> >> There, a ncf is constructed with 3 parameters, v1, v2, lambda. > > These correspond to df1, df2, and nc in the same order. > >> Then the cdf is given as a function of a single variable, x. >> >> In scipy.stats.ncf, there are many constructor parameters. ?Which correspond >> to the v1,v2,lambda I was looking for? >> >> scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, >> badvalue=None, name=None, longname=None, shapes=None, extradoc=None) >> >> >> In scipy.stats.ncf the cdf has >> ncf.cdf(x,dfn,dfd,nc,loc=0,scale=1) >> >> again, I don't know what they mean. ?I think x is my x, > > Yes. > >> but I don't know >> what the others are. > > Just ignore loc and scale. > and you have broadcasting on the arguments (for most distribution and cases) >>> scipy.stats.ncf.cdf(np.linspace(0,5,3), [[20],[30]], 10, 0.5) array([[ 0. , 0.92573897, 0.99322105], [ 0. , 0.9327125 , 0.99435371]]) Josef From ndbecker2 at gmail.com Wed May 27 19:47:37 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 27 May 2009 19:47:37 -0400 Subject: [SciPy-user] noncentral F distribution? References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> Message-ID: Robert Kern wrote: > On Wed, May 27, 2009 at 18:38, Neal Becker wrote: >> josef.pktd at gmail.com wrote: >> >>> On Wed, May 27, 2009 at 7:16 PM, Neal Becker >>> wrote: >>>> Does scipy have non central F distribution? 
(I need cdf for that) >>>> >>> >>>>>> print scipy.stats.ncf.extradoc >>> >>> >>> Non-central F distribution >>> >>> ncf.pdf(x,df1,df2,nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) >>> * df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) >>> * (df2+df1*x)**(-(df1+df2)/2) >>> * gamma(df1/2)*gamma(1+df2/2) >>> * L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) >>> / (B(v1/2, v2/2) * gamma((v1+v2)/2)) >>> for df1, df2, nc > 0. >>> >>>>>> scipy.stats.ncf.cdf >>> >> at 0x021DDE90>> >>> >>> note 3rd or 4th moments are wrong >>> >>> Josef >> >> I found the page: >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ncf.html#scipy.stats.ncf >> >> but I don't know what the parameters mean. >> >> I was looking for something like: >> http://www.boost.org/doc/libs/1_39_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/dist_ref/dists/nc_f_dist.html >> >> There, a ncf is constructed with 3 parameters, v1, v2, lambda. > > These correspond to df1, df2, and nc in the same order. > >> Then the cdf is given as a function of a single variable, x. >> >> In scipy.stats.ncf, there are many constructor parameters. Which >> correspond to the v1,v2,lambda I was looking for? >> >> scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, >> badvalue=None, name=None, longname=None, shapes=None, extradoc=None) >> >> >> In scipy.stats.ncf the cdf has >> ncf.cdf(x,dfn,dfd,nc,loc=0,scale=1) >> >> again, I don't know what they mean. I think x is my x, > > Yes. > >> but I don't know >> what the others are. > > Just ignore loc and scale. > Thanks! Just one more. What are dfn, dfd? The doc calls them "shape parameters", but I don't know what that means. From robert.kern at gmail.com Wed May 27 19:52:08 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 18:52:08 -0500 Subject: [SciPy-user] noncentral F distribution? In-Reply-To: References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> Message-ID: <3d375d730905271652k38b040e2s4a9658f3842516c7@mail.gmail.com> On Wed, May 27, 2009 at 18:47, Neal Becker wrote: > Thanks! ?Just one more. ?What are dfn, dfd? ?The doc calls them "shape > parameters", but I don't know what that means. A "shape parameter" is a generic term for any parameter that is not a location or scale parameter. In this case, dfn is the parameter for the degrees of freedom in the numerator of the expression for the F distribution (whether it is noncentral or not) and dfd is the degrees of freedom in the denominator. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Wed May 27 19:59:36 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 27 May 2009 19:59:36 -0400 Subject: [SciPy-user] noncentral F distribution? References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> <3d375d730905271652k38b040e2s4a9658f3842516c7@mail.gmail.com> Message-ID: Robert Kern wrote: > On Wed, May 27, 2009 at 18:47, Neal Becker wrote: > >> Thanks! Just one more. What are dfn, dfd? The doc calls them "shape >> parameters", but I don't know what that means. > > A "shape parameter" is a generic term for any parameter that is not a > location or scale parameter. 
In this case, dfn is the parameter for > the degrees of freedom in the numerator of the expression for the F > distribution (whether it is noncentral or not) and dfd is the degrees > of freedom in the denominator. > I think I get it now. I had assumed that you must first construct an instance of a ncf object (specifying parameters) and then call the cdf method (specifying x). Now I see that you simply call: ncf.cdf (x, dfn, dfd, nc) Is that correct? BTW, I was confused by: scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, badvalue=None, name=None, longname=None, shapes=None, extradoc=None) which is the first thing seen in the doc. It appears to be the constructor declaration? These parameters don't seem to be defined anywhere. From robert.kern at gmail.com Wed May 27 20:07:32 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 May 2009 19:07:32 -0500 Subject: [SciPy-user] noncentral F distribution? In-Reply-To: References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> <3d375d730905271652k38b040e2s4a9658f3842516c7@mail.gmail.com> Message-ID: <3d375d730905271707m52d389ddga0ff8975589b7edb@mail.gmail.com> On Wed, May 27, 2009 at 18:59, Neal Becker wrote: > Robert Kern wrote: > >> On Wed, May 27, 2009 at 18:47, Neal Becker wrote: >> >>> Thanks! ?Just one more. ?What are dfn, dfd? ?The doc calls them "shape >>> parameters", but I don't know what that means. >> >> A "shape parameter" is a generic term for any parameter that is not a >> location or scale parameter. In this case, dfn is the parameter for >> the degrees of freedom in the numerator of the expression for the F >> distribution (whether it is noncentral or not) and dfd is the degrees >> of freedom in the denominator. > > I think I get it now. ?I had assumed that you must first construct an > instance of a ncf object (specifying parameters) and then call the cdf > method (specifying x). > > Now I see that you simply call: > > ncf.cdf (x, dfn, dfd, nc) > > Is that correct? You can do either, actually. ncf(dfn, dfd, nc).cdf(x) ncf.cdf(x, dfn, dfd, nc) The rv_continuous docstring is a bit clearer on this point than the individual distributions' docstrings. > BTW, I was confused by: > > scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, > badvalue=None, name=None, longname=None, shapes=None, extradoc=None) > > ?which is the first thing seen in the doc. ?It appears to be the constructor > declaration? ?These parameters don't seem to be defined anywhere. Heh. Yeah. The thing this, scipy.stats.ncf is actually an instance of a class, not a class itself. The doc generator is picking up the __init__ of the class rather than the __call__. But even then, __call__ just takes *args, **kwds and parses them according to the data it is configured with. The doc generator will probably need some special support to document the distributions properly. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From josef.pktd at gmail.com Wed May 27 20:24:36 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 May 2009 20:24:36 -0400 Subject: [SciPy-user] linear regression In-Reply-To: <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> Message-ID: <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com> On Wed, May 27, 2009 at 7:35 PM, Robert Kern wrote: > On Wed, May 27, 2009 at 15:29, ? wrote: >> On Wed, May 27, 2009 at 3:37 PM, Robert Kern wrote: > >>> For "y=f(x)" models, this is true. Both y and x can be multivariate, >>> and you can express the covariance of the uncertainties for each, but >>> not covariance between the y and x uncertainties. This is because of >>> the numerical tricks used for efficient implementation >> >> In this case, OLS would still be unbiased in the linear case, but >> maybe not efficient. > > Are you sure? I see significant deviations using a simple example > (albeit one which is utterly rigged in ODR's favor). The X > uncertainties start small and grow with increasing X. The Y > uncertainties start large and shrink with increasing X. Plotting the > estimates shows some strange structure in the OLS estimates. > > > import numpy as np > > from scipy.odr import RealData, ODR > from scipy.odr.models import unilinear > > > beta_true = np.array([-0.47960828215176365, ?5.47674024758398481]) > p_x = np.array([0.,.9,1.8,2.6,3.3,4.4,5.2,6.1,6.5,7.4]) > p_y = beta_true[0] * p_x + beta_true[1] > p_sx = np.array([.03,.03,.04,.035,.07,.11,.13,.22,.74,1.]) > p_sy = np.array([1.,.74,.5,.35,.22,.22,.12,.12,.1,.04]) > > > def random_betas(n=500, prng=np.random): > ? ?""" Compute random parameter vectors from both ODR and OLS by generating > ? ?random data and fitting it. > ? ?""" > ? ?odr_betas = [] > ? ?ols_betas = [] > ? ?for i in range(n): > ? ? ? ?x = np.random.normal(p_x, p_sx) > ? ? ? ?y = np.random.normal(p_y, p_sy) > > ? ? ? ?# ODR: > ? ? ? ?data = RealData(x, y, sx=p_sx, sy=p_sy) > ? ? ? ?odr = ODR(data, unilinear, beta0=[1., 1.]) > ? ? ? ?odr_out = odr.run() > ? ? ? ?odr_betas.append(odr_out.beta) > > ? ? ? ?# Weighted OLS: > ? ? ? ?A = np.ones((len(x), 2)) > ? ? ? ?A[:,0] = x > ? ? ? ?# Weight by the Y error. > ? ? ? ?A /= p_sy[:,np.newaxis] > ? ? ? ?b, res, rank, s = np.linalg.lstsq(A, y/p_sy) > ? ? ? ?ols_betas.append(b) > > ? ? ? ?# Alternately: > ? ? ? ?#ols = ODR(data, unilinear, beta0=[1., 1.]) > ? ? ? ?#ols.set_job(fit_type=2) > ? ? ? ?#ols_out = ols.run() > ? ? ? ?#ols_betas.append(ols_out.beta) > ? ?return np.array(odr_betas).T, np.array(ols_betas).T > after removing the weighting in your example to get plain OLS, I get >>> bodr, bols = random_betas(5000) >>> bols.mean(1) array([-0.4757033 , 5.46418868]) >>> bodr.mean(1) array([-0.48364392, 5.49508047]) >>> bodr.mean(1)-beta_true array([-0.00403564, 0.01834022]) >>> bols.mean(1)-beta_true array([ 0.00390498, -0.01255157]) I don't see yet why the results with weighted ols are much worse. 
I also confirmed with my in-house econometrician whether it's really unbiased and not just asymptotically unbiased. As long as the measurement errors in the x regressors are uncorrelated with the regression error, OLS is unbiased:

y = X*b + u,   E(X*u) = 0

That's the part that is used in the proof of unbiasedness.

Josef

From josef.pktd at gmail.com  Wed May 27 20:40:39 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 27 May 2009 20:40:39 -0400
Subject: [SciPy-user] linear regression
In-Reply-To: <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com>
References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com>
Message-ID: <1cd32cbb0905271740w2b089d01w4f78e47755a507ad@mail.gmail.com>

On Wed, May 27, 2009 at 8:24 PM, wrote:
> On Wed, May 27, 2009 at 7:35 PM, Robert Kern wrote:
>> On Wed, May 27, 2009 at 15:29, wrote:
>>> On Wed, May 27, 2009 at 3:37 PM, Robert Kern wrote:
>>
>>>> For "y=f(x)" models, this is true. Both y and x can be multivariate,
>>>> and you can express the covariance of the uncertainties for each, but
>>>> not covariance between the y and x uncertainties. This is because of
>>>> the numerical tricks used for efficient implementation
>>>
>>> In this case, OLS would still be unbiased in the linear case, but
>>> maybe not efficient.
>>
>> Are you sure? I see significant deviations using a simple example
>> (albeit one which is utterly rigged in ODR's favor). The X
>> uncertainties start small and grow with increasing X. The Y
>> uncertainties start large and shrink with increasing X. Plotting the
>> estimates shows some strange structure in the OLS estimates.
>>
>>
>> import numpy as np
>>
>> from scipy.odr import RealData, ODR
>> from scipy.odr.models import unilinear
>>
>>
>> beta_true = np.array([-0.47960828215176365, 5.47674024758398481])
>> p_x = np.array([0.,.9,1.8,2.6,3.3,4.4,5.2,6.1,6.5,7.4])
>> p_y = beta_true[0] * p_x + beta_true[1]
>> p_sx = np.array([.03,.03,.04,.035,.07,.11,.13,.22,.74,1.])
>> p_sy = np.array([1.,.74,.5,.35,.22,.22,.12,.12,.1,.04])
>>
>>
>> def random_betas(n=500, prng=np.random):
>>     """ Compute random parameter vectors from both ODR and OLS by generating
>>     random data and fitting it.
>>     """
>>     odr_betas = []
>>     ols_betas = []
>>     for i in range(n):
>>         x = np.random.normal(p_x, p_sx)
>>         y = np.random.normal(p_y, p_sy)
>>
>>         # ODR:
>>         data = RealData(x, y, sx=p_sx, sy=p_sy)
>>         odr = ODR(data, unilinear, beta0=[1., 1.])
>>         odr_out = odr.run()
>>         odr_betas.append(odr_out.beta)
>>
>>         # Weighted OLS:
>>         A = np.ones((len(x), 2))
>>         A[:,0] = x
>>         # Weight by the Y error.
>>         A /= p_sy[:,np.newaxis]
>>         b, res, rank, s = np.linalg.lstsq(A, y/p_sy)
>>         ols_betas.append(b)
>>
>>         # Alternately:
>>         #ols = ODR(data, unilinear, beta0=[1., 1.])
>>         #ols.set_job(fit_type=2)
>>         #ols_out = ols.run()
>>         #ols_betas.append(ols_out.beta)
>>     return np.array(odr_betas).T, np.array(ols_betas).T
>>
>
> After removing the weighting in your example to get plain OLS, I get
>
>>>> bodr, bols = random_betas(5000)
>>>> bols.mean(1)
> array([-0.4757033 ,  5.46418868])
>>>> bodr.mean(1)
> array([-0.48364392,  5.49508047])
>>>> bodr.mean(1)-beta_true
> array([-0.00403564,  0.01834022])
>>>> bols.mean(1)-beta_true
> array([ 0.00390498, -0.01255157])
>
> I don't see yet why the results with weighted OLS are much worse. I
> also confirmed with my in-house econometrician whether it's really
> unbiased and not just asymptotically unbiased.
> As long as the measurement errors in the x regressors are uncorrelated
> with the regression error, OLS is unbiased: y = X*b + u, E(X*u) = 0.
> That's the part that is used in the proof of unbiasedness.

The variance of the error term in the regression equation is a linear combination of the true error (p_sy) and the measurement error in x (p_sx).

So the correct weighting would be according to p_sy**2 + beta**2 * p_sx**2, which is in practice not possible since we don't know beta, or maybe an iterative approach would work (at least something like this should be correct). I haven't tried it yet with your example.

Josef

From robert.kern at gmail.com  Wed May 27 20:40:44 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 May 2009 19:40:44 -0500
Subject: [SciPy-user] linear regression
In-Reply-To: <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com>
References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com>
Message-ID: <3d375d730905271740n22ec5128l21822b1f8edeaefa@mail.gmail.com>

On Wed, May 27, 2009 at 19:24, wrote:
> After removing the weighting in your example to get plain OLS, I get
>
>>>> bodr, bols = random_betas(5000)
>>>> bols.mean(1)
> array([-0.4757033 ,  5.46418868])
>>>> bodr.mean(1)
> array([-0.48364392,  5.49508047])
>>>> bodr.mean(1)-beta_true
> array([-0.00403564,  0.01834022])
>>>> bols.mean(1)-beta_true
> array([ 0.00390498, -0.01255157])

I suspect the unweighted OLS performs well because of the structure of the uncertainties. If you replace p_sy with a uniform 0.03, for example, and use unweighted OLS, you get a lopsided distribution, though not one with a pronounced spur like the weighted OLS. The mean does not appear to be converging, either.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com  Wed May 27 21:03:57 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 May 2009 20:03:57 -0500
Subject: [SciPy-user] linear regression
In-Reply-To: <1cd32cbb0905271740w2b089d01w4f78e47755a507ad@mail.gmail.com>
References: <4A1D4184.9020009@creativetrax.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com> <1cd32cbb0905271740w2b089d01w4f78e47755a507ad@mail.gmail.com>
Message-ID: <3d375d730905271803o35fd2285kc3e09ea28903fd27@mail.gmail.com>

On Wed, May 27, 2009 at 19:40, wrote:

> The variance of the error term in the regression equation is a linear
> combination of the true error (p_sy) and the measurement error in x
> (p_sx)
>
> So the correct weighting would be according to p_sy**2 + beta**2 *
> p_sx**2, which is in practice not possible since we don't know beta,
> or maybe an iterative approach would work (at least something like
> this should be correct)

Yes! This is precisely what ODR does for you in the linear case, all in one shot.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From josef.pktd at gmail.com  Wed May 27 21:39:29 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 27 May 2009 21:39:29 -0400
Subject: [SciPy-user] linear regression
In-Reply-To: <3d375d730905271803o35fd2285kc3e09ea28903fd27@mail.gmail.com>
References: <4A1D4184.9020009@creativetrax.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com> <1cd32cbb0905271740w2b089d01w4f78e47755a507ad@mail.gmail.com> <3d375d730905271803o35fd2285kc3e09ea28903fd27@mail.gmail.com>
Message-ID: <1cd32cbb0905271839ldef873k5f663d37ef7f8dca@mail.gmail.com>

On Wed, May 27, 2009 at 9:03 PM, Robert Kern wrote:
> On Wed, May 27, 2009 at 19:40, wrote:
>
>> The variance of the error term in the regression equation is a linear
>> combination of the true error (p_sy) and the measurement error in x
>> (p_sx)
>>
>> So the correct weighting would be according to p_sy**2 + beta**2 *
>> p_sx**2, which is in practice not possible since we don't know beta,
>> or maybe an iterative approach would work (at least something like
>> this should be correct)
>
> Yes! This is precisely what ODR does for you in the linear case, all
> in one shot.

But if I have another instrument/measurement for x, I know how to use Instrumental Variables regression, and I can remove the bias-causing correlation between x and the regression error.

I don't doubt its ability to handle some cases very well; what I meant, rather, is that as an estimation framework it hasn't caught on.
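[Editor's note: a minimal sketch of the instrumental-variables (two-stage least squares) idea mentioned above. This is not from the thread; the data-generating process, the instrument z, and all numbers are made up for illustration.]

import numpy as np

np.random.seed(0)
n = 10000
x_true = np.random.normal(0., 1., n)
# instrument: correlated with the true regressor, independent of both noise terms below
z = x_true + np.random.normal(0., 1., n)
# regressor observed with measurement error
x_obs = x_true + np.random.normal(0., 0.5, n)
y = 1.0 + 2.0*x_true + np.random.normal(0., 0.5, n)

# stage 1: project the mismeasured regressor on the instrument
A = np.column_stack((np.ones(n), z))
x_hat = np.dot(A, np.linalg.lstsq(A, x_obs)[0])

# stage 2: OLS of y on the projected regressor; consistent for [1.0, 2.0]
B = np.column_stack((np.ones(n), x_hat))
beta_iv = np.linalg.lstsq(B, y)[0]

# for comparison: plain OLS on x_obs suffers attenuation bias
# (slope shrunk by roughly var(x_true)/(var(x_true) + 0.5**2) = 0.8)
C = np.column_stack((np.ones(n), x_obs))
beta_ols = np.linalg.lstsq(C, y)[0]

print "IV :", beta_iv
print "OLS:", beta_ols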
There is a huge literature on least squares and, more appropriate for this case, on generalized method of moments, and except for a chapter in an econometrics textbook, I haven't seen much on orthogonal least squares. My impression is that this is because the method takes care of the measurement errors (semi)automatically: it requires less explicit modeling of the error structure, but it is also more limited in incorporating additional information on the stochastic properties of the regressors.

Using a simple two-step iteration for estimating the weights, I get almost the same mean squared error as ODR, but the bias stays higher, which I don't understand.

Josef

# Weighted OLS (two-step: first an unweighted fit for beta, then reweight):
p_su = np.sqrt((beta_true[0] * p_sx)**2 + p_sy**2)
A = np.ones((len(x), 2))
A[:,0] = x
b, res, rank, s = np.linalg.lstsq(A, y)
p_su = np.sqrt((b[0] * p_sx)**2 + p_sy**2)
A2 = A/p_su[:,np.newaxis]
b, res, rank, s = np.linalg.lstsq(A2, y/p_su)
ols_betas.append(b)

----------

bodr,bols = random_betas(10000)
print "Estimate"
print "odr", bodr.mean(1)
print "ols", bols.mean(1)
print "Bias"
print "odr", bodr.mean(1)-beta_true
print "ols", bols.mean(1)-beta_true
print "MSE"
print "odr", ((bodr-beta_true[:,np.newaxis])**2).mean(1)
print "ols", ((bols-beta_true[:,np.newaxis])**2).mean(1)

Estimate
odr [-0.48114078  5.48110708]
ols [-0.46611397  5.40642845]
Bias
odr [-0.0015325   0.00436683]
ols [ 0.01349432 -0.07031179]
MSE
odr [ 0.00328421  0.08457911]
ols [ 0.0035286   0.09229023]

From abielr at gmail.com  Thu May 28 00:21:25 2009
From: abielr at gmail.com (Abiel Reinhart)
Date: Thu, 28 May 2009 00:21:25 -0400
Subject: [SciPy-user] Number of periods in a year in scikits.timeseries
Message-ID:

Is there an automated way to determine the number of periods in a year for a given frequency in scikits.timeseries? For instance, a monthly series has 12 periods in a year, and a daily series has 365.25. It would not be too hard just to create a table for this, but I wanted to first check if there was an automated way. This is useful when you want to be able to take annualized percent changes or differences.

Thank you.

Abiel

From josef.pktd at gmail.com  Thu May 28 01:01:49 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 May 2009 01:01:49 -0400
Subject: [SciPy-user] linear regression
In-Reply-To: <1cd32cbb0905271839ldef873k5f663d37ef7f8dca@mail.gmail.com>
References: <4A1D4184.9020009@creativetrax.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <1cd32cbb0905271329o549be00duf7ad8e20ac87c3db@mail.gmail.com> <3d375d730905271635n4dc6c478jf42120843496ea81@mail.gmail.com> <1cd32cbb0905271724x1bba5c96xa347185e69185463@mail.gmail.com> <1cd32cbb0905271740w2b089d01w4f78e47755a507ad@mail.gmail.com> <3d375d730905271803o35fd2285kc3e09ea28903fd27@mail.gmail.com> <1cd32cbb0905271839ldef873k5f663d37ef7f8dca@mail.gmail.com>
Message-ID: <1cd32cbb0905272201i5170a832yc5fd4288f4291a23@mail.gmail.com>

On Wed, May 27, 2009 at 9:39 PM, wrote:
> On Wed, May 27, 2009 at 9:03 PM, Robert Kern wrote:
>> On Wed, May 27, 2009 at 19:40, wrote:
>>
>>> The variance of the error term in the regression equation is a linear
>>> combination of the true error (p_sy) and the measurement error in x
>>> (p_sx)
>>>
>>> So the correct weighting would be according to p_sy**2 + beta**2 *
>>> p_sx**2, which is in practice not possible since we don't know beta,
>>> or maybe an iterative approach would work (at least something like
>>> this should be correct)
>>
>> Yes! This is precisely what ODR does for you in the linear case, all
>> in one shot.

> Using a simple two-step iteration for estimating the weights, I get
> almost the same mean squared error as ODR, but the bias stays higher,
> which I don't understand.

So, Robert, you were right about the bias. Since the bias didn't want to go away, especially for large measurement errors, I had to look up some textbooks.

In the case of measurement errors the observed regressors are (always) correlated with the error in the regression equation, even if the true (unobserved) variable is not. The reference model I had in mind was random regressors that are observed. If the observed regressors are uncorrelated with the error term, then there is no bias.

Models with measurement errors show similar symptoms to, for example, models with endogeneity bias, and the standard econometrics textbook solution is still instrumental variables. Given that the symptoms and the standard treatment are (mostly) the same, I had the wrong intuition that the disease is also the same.

So, in all the variations of your example that I tried, the bias goes in favor of ODR compared to OLS. The MSEs are essentially the same, but I assume there are cases where the MSE also deteriorates.

Josef

From matthieu.brucher at gmail.com  Thu May 28 03:29:57 2009
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 28 May 2009 09:29:57 +0200
Subject: [SciPy-user] Segmentation fault with 0.7
In-Reply-To:
References:
Message-ID:

Hi,

It seems I used gfortran instead of ifort, so I'm trying to get it to work, but:
- -arch SSE2 is no longer a valid option (I had to modify numpy for this)
- I get a lot of problems because -fPIC is missing from the command line

Matthieu

2009/5/27 Matthieu Brucher :
> Hi,
>
> I've also tested scipy 0.7 with the MKL (no choice, I don't have atlas
> or refblas installed, and I found a way of using the latest by
> preloading libmkl_core.so), and I got a segmentation fault on a LAPACK
> function:
>
> test_y_bad_size (test_fblas.TestZswap) ... ok
> test_y_stride (test_fblas.TestZswap) ... ok
> test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyev
> Clapack empty, skip clapack test
> test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr
> Clapack empty, skip clapack test
> test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr_ranges
> Clapack empty, skip clapack test
> test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyev
> Clapack empty, skip clapack test
> test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr
> Clapack empty, skip clapack test
> test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr_ranges
> Clapack empty, skip clapack test
> test_dsyev (test_esv.TestEsv) ... ok
> test_dsyevr (test_esv.TestEsv) ... Segmentation fault
>
> Is it a new function or something like that? I don't remember
> encountering this error in previous packages (although I didn't always
> launch the full tests).
>
> Matthieu
> --
> Information System Engineer, Ph.D.
> Website: http://matthieu-brucher.developpez.com/
> Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>

--
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From sandeep.prasad at tcs.com  Thu May 28 03:19:27 2009
From: sandeep.prasad at tcs.com (sandeep.prasad at tcs.com)
Date: Thu, 28 May 2009 12:49:27 +0530
Subject: [SciPy-user] Cov of 2 vectors
Message-ID:

Dear All,

How do I compute the covariance of 2 vectors? The cov function in scipy always returns a 2 by 2 matrix, even if the vectors are of length > 2; how is this possible?

However, for the covariance of x and y I need a single number; are there any built-in functions for this?

Regards,
Sandeep Prasad
Tata Consultancy Services
Plot No 1, Survey No. 64/2, Software Units Layout
Serilingampally Mandal, Madhapur
Hyderabad, Andhra Pradesh
India
Ph:- 04066673582
Cell:- 9640795927
Mailto: sandeep.prasad at tcs.com
Website: http://www.tcs.com

From scott.sinclair.za at gmail.com  Thu May 28 03:48:54 2009
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Thu, 28 May 2009 09:48:54 +0200
Subject: [SciPy-user] Cov of 2 vectors
In-Reply-To:
References:
Message-ID: <6a17e9ee0905280048y10a260c9odcf8b222eb24a60e@mail.gmail.com>

>2009/5/28 :
> How do I compute the covariance of 2 vectors? The cov function in scipy
> always returns a 2 by 2 matrix, even if the vectors are of length > 2;
> how is this possible?
>
> However, for the covariance of x and y I need a single number; are there
> any built-in functions for this?

You're getting the full covariance matrix from cov. Since you only have two vectors, this is always a 2x2 matrix. The off-diagonal terms [at locations (0, 1) & (1, 0)] are the covariance of your two vectors and the on-diagonal terms represent the variance of each vector.
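[Editor's note: a minimal sketch of what is described above, with made-up data.]

import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 1., 4., 3., 6.])

C = np.cov(x, y)   # always a 2x2 matrix for two input vectors, whatever their length
cov_xy = C[0, 1]   # the single covariance number (identical to C[1, 0])
var_x = C[0, 0]    # variance of x
var_y = C[1, 1]    # variance of y

print C
print cov_xy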
See:
http://docs.scipy.org/doc/numpy-1.3.x/reference/generated/numpy.cov.html#numpy.cov
http://en.wikipedia.org/wiki/Covariance_matrix
http://mathworld.wolfram.com/CovarianceMatrix.html

Cheers,
Scott

From devicerandom at gmail.com  Thu May 28 08:22:30 2009
From: devicerandom at gmail.com (ms)
Date: Thu, 28 May 2009 13:22:30 +0100
Subject: [SciPy-user] integrating a system of differential equations
In-Reply-To: <114880320905270934s7bc09ebg5846bf6227cb7551@mail.gmail.com>
References: <4A1D3111.6060000@gmail.com> <4A1D5CB0.1010100@gmail.com> <1243439437.14846.51.camel@localhost.localdomain> <4A1D66B8.9030902@gmail.com> <114880320905270934s7bc09ebg5846bf6227cb7551@mail.gmail.com>
Message-ID: <4A1E8206.1020505@gmail.com>

Warren Weckesser wrote:
> There are also examples at scipy.org:
>
> http://www.scipy.org/LoktaVolterraTutorial
> http://www.scipy.org/Cookbook/CoupledSpringMassSystem

Thanks, the Lotka-Volterra example is what I needed. It seems to give reasonable results now. Thanks a lot.

m.

From josef.pktd at gmail.com  Thu May 28 10:56:48 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 May 2009 10:56:48 -0400
Subject: [SciPy-user] noncentral F distribution?
In-Reply-To: <3d375d730905271707m52d389ddga0ff8975589b7edb@mail.gmail.com>
References: <1cd32cbb0905271624w18e27595lec4fa8ec319baf63@mail.gmail.com> <3d375d730905271642v68972ec7m9a53da52f65e0849@mail.gmail.com> <3d375d730905271652k38b040e2s4a9658f3842516c7@mail.gmail.com> <3d375d730905271707m52d389ddga0ff8975589b7edb@mail.gmail.com>
Message-ID: <1cd32cbb0905280756w19cc4af0qcfb2f3f22570092b@mail.gmail.com>

On Wed, May 27, 2009 at 8:07 PM, Robert Kern wrote:
> On Wed, May 27, 2009 at 18:59, Neal Becker wrote:
>> Robert Kern wrote:
>>
>>> On Wed, May 27, 2009 at 18:47, Neal Becker wrote:
>>>
>>>> Thanks! Just one more. What are dfn, dfd? The doc calls them "shape
>>>> parameters", but I don't know what that means.
>>>
>>> A "shape parameter" is a generic term for any parameter that is not a
>>> location or scale parameter. In this case, dfn is the parameter for
>>> the degrees of freedom in the numerator of the expression for the F
>>> distribution (whether it is noncentral or not) and dfd is the degrees
>>> of freedom in the denominator.
>>
>> I think I get it now. I had assumed that you must first construct an
>> instance of a ncf object (specifying parameters) and then call the cdf
>> method (specifying x).
>>
>> Now I see that you simply call:
>>
>> ncf.cdf (x, dfn, dfd, nc)
>>
>> Is that correct?
>
> You can do either, actually.
>
>   ncf(dfn, dfd, nc).cdf(x)
>   ncf.cdf(x, dfn, dfd, nc)
>
> The rv_continuous docstring is a bit clearer on this point than the
> individual distributions' docstrings.
>
>> BTW, I was confused by:
>>
>> scipy.stats.ncf(momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14,
>> badvalue=None, name=None, longname=None, shapes=None, extradoc=None)
>>
>> which is the first thing seen in the doc. It appears to be the constructor
>> declaration? These parameters don't seem to be defined anywhere.
>
> Heh. Yeah. The thing is, scipy.stats.ncf is actually an instance of
> a class, not a class itself. The doc generator is picking up the
> __init__ of the class rather than the __call__. But even then,
> __call__ just takes *args, **kwds and parses them according to the
> data it is configured with. The doc generator will probably need some
> special support to document the distributions properly.

I usually do the following, so I didn't see that help(..) doesn't include the generated docstring:

>>> print scipy.stats.ncf.__doc__
A non-central F distribution continuous random variable.

Continuous random variables are defined from a standard form and may require some shape parameters to complete its specification. Any optional keyword parameters can be passed to the methods of the RV object as given below:

Methods
-------
ncf.rvs(dfn,dfd,nc,loc=0,scale=1,size=1) - random variates
ncf.pdf(x,dfn,dfd,nc,loc=0,scale=1) - probability density function
ncf.cdf(x,dfn,dfd,nc,loc=0,scale=1) - cumulative density function
...

From pgmdevlist at gmail.com  Thu May 28 11:46:49 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 28 May 2009 11:46:49 -0400
Subject: [SciPy-user] Number of periods in a year in scikits.timeseries
In-Reply-To:
References:
Message-ID: <7BF30BF0-2449-4ADF-A117-F42F91F0F38E@gmail.com>

On May 28, 2009, at 12:21 AM, Abiel Reinhart wrote:

> Is there an automated way to determine the number of periods in a year
> for a given frequency in scikits.timeseries?

Nope, not yet. A dictionary would probably do the trick, something along the lines of:

annmulti = {_c.FR_ANN: 1.,
            _c.FR_QTR: 4.,
            _c.FR_MTH: 12.,
            _c.FR_WK:  365.25/7.,
            _c.FR_DAY: 365.25,
            _c.FR_HR:  8766.,
            _c.FR_MIN: 525960.,
            _c.FR_SEC: 31557600.}

From ndbecker2 at gmail.com  Thu May 28 13:56:56 2009
From: ndbecker2 at gmail.com (Neal Becker)
Date: Thu, 28 May 2009 13:56:56 -0400
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
Message-ID:

def pmiss2 (x, esnodB, N):
    esno = 10**(0.1 * esnodB) * N
    var = 1/esno
    _lambda = 1/(0.5*var)

    return ncx2.cdf (x, 2, _lambda)

x = np.arange (0, 50, 0.1)
p1 = [pmiss2 (e, 3.5, 24) for e in x]

What's with this strange discontinuity?

print p1:
...
3.475382846574262e-21,
4.2226227741362447e-21,
5.1248671653198949e-21,
6.2130949241675783e-21,
7.5242411687161146e-21,
9.1022970542215721e-21,
5.8787514615651664e-09,
6.2565279721932619e-09,
6.656924144742753e-09,
7.0811937641411923e-09,
7.5306544171300622e-09,
8.0066904400027596e-09,
8.5107559880675483e-09,
9.044378231221098e-09,
9.6091606801490766e-09,
...

From robert.kern at gmail.com  Thu May 28 14:52:15 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 May 2009 13:52:15 -0500
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
In-Reply-To:
References:
Message-ID: <3d375d730905281152p1b9745c1y12b3bce4bd956c90@mail.gmail.com>

On Thu, May 28, 2009 at 12:56, Neal Becker wrote:
> def pmiss2 (x, esnodB, N):
>     esno = 10**(0.1 * esnodB) * N
>     var = 1/esno
>     _lambda = 1/(0.5*var)
>
>     return ncx2.cdf (x, 2, _lambda)
>
> x = np.arange (0, 50, 0.1)
> p1 = [pmiss2 (e, 3.5, 24) for e in x]
>
> What's with this strange discontinuity?
> print p1:
> ...
> 3.475382846574262e-21,
> 4.2226227741362447e-21,
> 5.1248671653198949e-21,
> 6.2130949241675783e-21,
> 7.5242411687161146e-21,
> 9.1022970542215721e-21,
> 5.8787514615651664e-09,
> 6.2565279721932619e-09,
> 6.656924144742753e-09,
> 7.0811937641411923e-09,
> 7.5306544171300622e-09,
> 8.0066904400027596e-09,
> 8.5107559880675483e-09,
> 9.044378231221098e-09,
> 9.6091606801490766e-09,
> ...

Dunno. The CDF is just scipy.special.chndtr(), so you will have to dive through its code to see what's up. Between 22.3 and 22.4 is probably where the code changes from one approximation to another.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From josef.pktd at gmail.com  Thu May 28 14:53:31 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 May 2009 14:53:31 -0400
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
In-Reply-To:
References:
Message-ID: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com>

On Thu, May 28, 2009 at 1:56 PM, Neal Becker wrote:
> def pmiss2 (x, esnodB, N):
>     esno = 10**(0.1 * esnodB) * N
>     var = 1/esno
>     _lambda = 1/(0.5*var)
>
>     return ncx2.cdf (x, 2, _lambda)
>
> x = np.arange (0, 50, 0.1)
> p1 = [pmiss2 (e, 3.5, 24) for e in x]
>
> What's with this strange discontinuity?

This must be a bug in scipy.special.chndtr:

class ncx2_gen(rv_continuous):
    def _cdf(self, x, df, nc):
        return special.chndtr(x,df,nc)

Here is the isolated example:

>>> ncx2.cdf (np.arange (20, 25, 0.2), 2, 1.07458615e+02)
array([  8.53614872e-23,   1.31445107e-22,   2.01359832e-22,
         3.06896021e-22,   4.65417198e-22,   7.02373115e-22,
         1.05489191e-21,   1.57689175e-21,   2.34632278e-21,
         3.47538283e-21,   5.12486714e-21,   7.52424113e-21,
         5.87875088e-09,   6.65692349e-09,   7.53065368e-09,
         8.51075516e-09,   9.60915975e-09,   1.08390218e-08,
         1.22148313e-08,   1.37525362e-08,   1.54696748e-08,
         1.73854823e-08,   1.95211903e-08,   2.18999761e-08,
         2.45472869e-08])

This uses numerical integration of the pdf, which is slow but doesn't have the discontinuity:

>>> ncx2.veccdf(np.arange (20, 25, 0.2), 2, 1.07458615e+02)
array([  1.21805117e-09,   1.39734318e-09,   1.60114784e-09,
         1.83255976e-09,   2.09503169e-09,   2.39241204e-09,
         2.72898607e-09,   3.10952087e-09,   3.53931444e-09,
         4.02424929e-09,   4.57085090e-09,   5.18635130e-09,
         5.87875834e-09,   6.65693102e-09,   7.53066129e-09,
         8.51076283e-09,   9.60916749e-09,   1.08390296e-08,
         1.22148392e-08,   1.37525442e-08,   1.54696828e-08,
         1.73855268e-08,   1.95212353e-08,   2.19000216e-08,
         2.45473328e-08])

Thanks for reporting; the test suite is not so finely tuned as to catch these cases.

The quick fix would be to replace the call to special with the numerical integration, but this will make the cdf much slower (for the cases where it is correct). I did this for some other distributions.

But for the use as the distribution of a test statistic, the jump is irrelevant: whether you reject a hypothesis with 1e-9 or 1e-21 doesn't really make a difference.

Josef

From robert.kern at gmail.com  Thu May 28 15:03:25 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 May 2009 14:03:25 -0500
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
In-Reply-To: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com>
References: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com>
Message-ID: <3d375d730905281203x5920da2bn10ccf1dc5aeb0496@mail.gmail.com>

On Thu, May 28, 2009 at 13:53, wrote:
> On Thu, May 28, 2009 at 1:56 PM, Neal Becker wrote:
>> def pmiss2 (x, esnodB, N):
>>     esno = 10**(0.1 * esnodB) * N
>>     var = 1/esno
>>     _lambda = 1/(0.5*var)
>>
>>     return ncx2.cdf (x, 2, _lambda)
>>
>> x = np.arange (0, 50, 0.1)
>> p1 = [pmiss2 (e, 3.5, 24) for e in x]
>>
>> What's with this strange discontinuity?
>
> This must be a bug in scipy.special.chndtr

I notice the following snippets of code, which appear guilty:

C     .. Statement Functions ..
      LOGICAL qsmall
C     ..
C     .. Statement Function definitions ..
      qsmall(xx) = sum .LT. 1.0D-20 .OR. xx .LT. eps*sum
# That is a feature of Fortran I knew nothing about.

   60 IF (qsmall(term)) GO TO 80

   80 cum = sum
      RETURN

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From gael.varoquaux at normalesup.org  Thu May 28 15:37:30 2009
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 28 May 2009 21:37:30 +0200
Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression)
In-Reply-To: <1cd32cbb0905271527u410fcee0oee1be6ca6545673b@mail.gmail.com>
References: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <4A1D6BC7.2020402@gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <20090527205050.GA26519@phare.normalesup.org> <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> <20090527215340.GA12563@phare.normalesup.org> <1cd32cbb0905271527u410fcee0oee1be6ca6545673b@mail.gmail.com>
Message-ID: <20090528193730.GC18415@phare.normalesup.org>

On Wed, May 27, 2009 at 06:27:29PM -0400, josef.pktd at gmail.com wrote:
> Sounds like a recursive system of linear (simultaneous) equations with
> linear restrictions to me. If you want an unbiased estimator, then
> going row by row, and solving each linear OLS, linalg.lstsq, would be
> the standard way to go. Substituting the previous estimates of the Y's
> into the next step.

Oups, I realise I forgot to answer.

You are right, this is a way to interpret it, and I was solving the system as you suggest. What I didn't like is that the solution I was getting was dependent on the order of the variables, but I had forgotten that the lower triangular matrix was an approximation. The non-permutation-invariance came from this approximation, not from the way I was solving the system.

Unfortunately, it seems that the solution to the complete problem is still an open research question. (FYI, the problem is to find the OLS solution to "M X = X + e", with M definite positive, and with a given support.)

X's dimensions are anywhere between (50, 50) and (300, 500), including the bad situation (300, 50).

This is related to sparse covariance matrix estimation. I don't think there is (yet) an easy answer.

Thanks for your answer; it brought me back to Earth, making me realize that I was already doing the right thing, and to look for the problem elsewhere.
Gaël

From josef.pktd at gmail.com  Thu May 28 16:43:33 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 May 2009 16:43:33 -0400
Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression)
In-Reply-To: <20090528193730.GC18415@phare.normalesup.org>
References: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <1cd32cbb0905271128g131434e9p98cafd914f674c89@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <20090527205050.GA26519@phare.normalesup.org> <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> <20090527215340.GA12563@phare.normalesup.org> <1cd32cbb0905271527u410fcee0oee1be6ca6545673b@mail.gmail.com> <20090528193730.GC18415@phare.normalesup.org>
Message-ID: <1cd32cbb0905281343k1208fb2fud7e20d5d823d4e60@mail.gmail.com>

On Thu, May 28, 2009 at 3:37 PM, Gael Varoquaux wrote:
> On Wed, May 27, 2009 at 06:27:29PM -0400, josef.pktd at gmail.com wrote:
>> Sounds like a recursive system of linear (simultaneous) equations with
>> linear restrictions to me. If you want an unbiased estimator, then
>> going row by row, and solving each linear OLS, linalg.lstsq, would be
>> the standard way to go. Substituting the previous estimates of the Y's
>> into the next step.
>
> Oups, I realise I forgot to answer.
>
> You are right, this is a way to interpret it, and I was solving the
> system as you suggest. What I didn't like is that the solution I was
> getting was dependent on the order of the variables, but I had forgotten
> that the lower triangular matrix was an approximation. The
> non-permutation-invariance came from this approximation, not from the
> way I was solving the system.
>
> Unfortunately, it seems that the solution to the complete problem is
> still an open research question. (FYI, the problem is to find the OLS
> solution to "M X = X + e", with M definite positive, and with a given
> support.)
>
> X's dimensions are anywhere between (50, 50) and (300, 500), including
> the bad situation (300, 50).
>
> This is related to sparse covariance matrix estimation. I don't think
> there is (yet) an easy answer.
>
> Thanks for your answer; it brought me back to Earth, making me realize
> that I was already doing the right thing, and to look for the problem
> elsewhere.
>
> Gaël

I'm not sure I understand anymore.

When estimating the parameters of a simultaneous system of equations with least squares, we need a lot of identifying restrictions; the lower triangular parameter matrix is the simplest one. And you don't get permutation invariance because the sequence of your equations is what identifies the parameters. In your case, you need to have enough identifying restrictions on the support of M, and given that you don't have any additional exogenous variables, the identifying restrictions might require that it can be reordered to a lower triangular form. (Disclaimer: after I mixed up the bias yesterday, I should mention that I haven't looked at this in a pretty long time.)

For the rest I'm a bit vague: if you don't want to impose the sequential identifying restriction, then you are just looking for a subspace that spans your X matrix with certain properties.

Given that you have an X that can have more rows than columns and vice versa, you have either more or fewer equations than unknowns, which should already create a large multiplicity of solutions for some cases.
Also, I expect your X'X (or in numpy, X.T * X) matrix to be singular (maybe it is X*X.T in your notation). So I would think that the solution will depend more on the eigenvector decomposition, or SVD, or pinv of X'X, and there might be many possibilities to span the space of X. I'm not sure how to get the subspace that satisfies your support-in-M restrictions if M is not lower triangular.

I don't really understand what permutation invariance you want, but if you want to impose some kind of symmetry, maybe this gives you identification of a unique solution.

Josef

From josephsmidt at gmail.com  Thu May 28 17:22:50 2009
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Thu, 28 May 2009 14:22:50 -0700
Subject: [SciPy-user] pcolor plot with points from a data file?
Message-ID: <142682e10905281422j63282a45g66804f11bc68fe4b@mail.gmail.com>

Hi,

I need a plot that looks just like this:
http://matplotlib.sourceforge.net/examples/pylab_examples/pcolor_demo.html.
However, I need it to be done for a 100x100 matrix I am providing from a data file called clout.dat that looks like:

1.22 1.22 1.24 ... 6.78
...
3.43 3.46 3.52 ... 1.11

Is there a way to make that same type of plot using a data file like this? Thanks.

Joseph Smidt

--
------------------------------------------------------------------------
Joseph Smidt
Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From robert.kern at gmail.com  Thu May 28 17:26:27 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 May 2009 16:26:27 -0500
Subject: [SciPy-user] pcolor plot with points from a data file?
In-Reply-To: <142682e10905281422j63282a45g66804f11bc68fe4b@mail.gmail.com>
References: <142682e10905281422j63282a45g66804f11bc68fe4b@mail.gmail.com>
Message-ID: <3d375d730905281426x660b1bebgc31852e5fab635e8@mail.gmail.com>

On Thu, May 28, 2009 at 16:22, Joseph Smidt wrote:
> Hi,
>
> I need a plot that looks just like this:
> http://matplotlib.sourceforge.net/examples/pylab_examples/pcolor_demo.html.

You will want to ask on the matplotlib list:
https://lists.sourceforge.net/lists/listinfo/matplotlib-users

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From josef.pktd at gmail.com  Thu May 28 20:10:32 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 May 2009 20:10:32 -0400
Subject: [SciPy-user] OLS matrix-f(x) = 0 problem (Was: linear regression)
In-Reply-To: <1cd32cbb0905281343k1208fb2fud7e20d5d823d4e60@mail.gmail.com>
References: <1cd32cbb0905270919ye63b42ch4600cd65e0c04b8b@mail.gmail.com> <3d375d730905271203k53760a3eq8ac3a6ba6a6acd67@mail.gmail.com> <1cd32cbb0905271222o3a5d1c73s64cd934734c7165a@mail.gmail.com> <3d375d730905271237w31e0a628rfe0a5d801f380136@mail.gmail.com> <20090527205050.GA26519@phare.normalesup.org> <3d375d730905271355l1b79aa1na8fcd191f7720684@mail.gmail.com> <20090527215340.GA12563@phare.normalesup.org> <1cd32cbb0905271527u410fcee0oee1be6ca6545673b@mail.gmail.com> <20090528193730.GC18415@phare.normalesup.org> <1cd32cbb0905281343k1208fb2fud7e20d5d823d4e60@mail.gmail.com>
Message-ID: <1cd32cbb0905281710q30b12d31pc2a43cf2aa829183@mail.gmail.com>

On Thu, May 28, 2009 at 4:43 PM, wrote:
> On Thu, May 28, 2009 at 3:37 PM, Gael Varoquaux wrote:
>> On Wed, May 27, 2009 at 06:27:29PM -0400, josef.pktd at gmail.com wrote:
>>> Sounds like a recursive system of linear (simultaneous) equations with
>>> linear restrictions to me. If you want an unbiased estimator, then
>>> going row by row, and solving each linear OLS, linalg.lstsq, would be
>>> the standard way to go. Substituting the previous estimates of the Y's
>>> into the next step.
>>
>> Oups, I realise I forgot to answer.
>>
>> You are right, this is a way to interpret it, and I was solving the
>> system as you suggest. What I didn't like is that the solution I was
>> getting was dependent on the order of the variables, but I had forgotten
>> that the lower triangular matrix was an approximation. The
>> non-permutation-invariance came from this approximation, not from the
>> way I was solving the system.
>>
>> Unfortunately, it seems that the solution to the complete problem is
>> still an open research question. (FYI, the problem is to find the OLS
>> solution to "M X = X + e", with M definite positive, and with a given
>> support.)
>>
>> X's dimensions are anywhere between (50, 50) and (300, 500), including
>> the bad situation (300, 50).
>>
>> This is related to sparse covariance matrix estimation. I don't think
>> there is (yet) an easy answer.
>>
>> Thanks for your answer; it brought me back to Earth, making me realize
>> that I was already doing the right thing, and to look for the problem
>> elsewhere.
>>
>> Gaël
>
> I'm not sure I understand anymore.
>
> When estimating the parameters of a simultaneous system of equations
> with least squares, we need a lot of identifying restrictions; the
> lower triangular parameter matrix is the simplest one. And you don't
> get permutation invariance because the sequence of your equations is
> what identifies the parameters. In your case, you need to have enough
> identifying restrictions on the support of M, and given that you don't
> have any additional exogenous variables, the identifying restrictions
> might require that it can be reordered to a lower triangular form.
> (Disclaimer: after I mixed up the bias yesterday, I should mention
> that I haven't looked at this in a pretty long time.)
>
> For the rest I'm a bit vague: if you don't want to impose the
> sequential identifying restriction, then you are just looking for a
> subspace that spans your X matrix with certain properties.
>
> Given that you have an X that can have more rows than columns and
> vice versa, you have either more or fewer equations than unknowns,
> which should already create a large multiplicity of solutions for some
> cases. Also, I expect your X'X (or in numpy, X.T * X) matrix to be
> singular (maybe it is X*X.T in your notation).
> So I would think that the solution will depend more on the eigenvector
> decomposition, or SVD, or pinv of X'X, and there might be many
> possibilities to span the space of X. I'm not sure how to get the
> subspace that satisfies your support-in-M restrictions if M is not
> lower triangular.

Now this just sounds like a description of principal component or factor analysis to me. ???

Josef

>
> I don't really understand what permutation invariance you want, but if
> you want to impose some kind of symmetry, maybe this gives you
> identification of a unique solution.
>
> Josef

From loris.bennett at fu-berlin.de  Fri May 29 06:09:27 2009
From: loris.bennett at fu-berlin.de (Loris Bennett)
Date: Fri, 29 May 2009 12:09:27 +0200
Subject: [SciPy-user] Install failure on AIX 5.3 due to missing linker flag prefix for compiler
In-Reply-To:
References: <1243319046.4790.0.camel@localhost>
Message-ID: <1243591767.4512.4.camel@localhost>

On Tue, 2009-05-26 at 09:51 -0700, Whitcomb, Mr. Tim wrote:
> > Now I am getting the following error when I try to install SciPy:
> >
> > g++ g++ -pthread
> > -bI:/opt/sw/python/Python-2.6.2/lib/python2.6/config/python.exp
> > build/temp.aix-5.3-2.6/scipy/interpolate/src/_interpolate.o
> > -Lbuild/temp.aix-5.3-2.6 -o
> > build/lib.aix-5.3-2.6/scipy/interpolate/_interpolate.so
> > g++: '-b' must come at the start of the command line
> > g++: '-b' must come at the start of the command line
> > error: Command "g++ g++ -pthread
>
> I ran into this issue with Scipy as well - the command *should* look
> something like
>   /path/to/ld_so_aix [c++ compiler]....
> but gets changed to
>   [c++ compiler] [c++ compiler]
> which I believe is an error.
>
> The fix that I used was to edit unixccompiler.py in the distutils
> package, and move the
>   linker[i] = self.compiler_cxx[i]
> statement under the
>   if os.path.basename(linker[0]) == "env"
> statement - this got rid of that issue.

This worked. Thanks.

> It also looks like it's including files using -bI:, which is more XL
> C++-ish than g++. I am very new to working on AIX machines, so I can't
> say if this is an error as well. Hopefully someone with more AIX
> experience than me can comment on these issues.
>
> On a side note, does numpy.test() crash with a MemoryError on your
> installation?

I haven't installed nose, so I couldn't do numpy.test(), but I tried the test program you posted and also got a MemoryError.

> Tim

--
Dr. Loris Bennett (Mr.)
Freie Universität Berlin
ZEDAT - Zentraleinrichtung für Datenverarbeitung / Computer Center
Compute & Media Service
Fabeckstr. 32, Room 221
D-14195 Berlin
Tel ++49 30 838 51024
Fax ++49 30 838 56721
Email loris.bennett at fu-berlin.de
Web www.zedat.fu-berlin.de

From loris.bennett at fu-berlin.de  Fri May 29 08:33:28 2009
From: loris.bennett at fu-berlin.de (Dr. Loris Bennett)
Date: Fri, 29 May 2009 14:33:28 +0200
Subject: [SciPy-user] Compile Error: error: expected `)' before 'PRIdPTR'
Message-ID: <87octcozqf.fsf@slate.zedat.fu-berlin.de>

Hi,

I am still trying to install SciPy 0.7.0 on AIX 5.3. I have managed to install NumPy 1.3.0.
When I try to install SciPy, I get:

scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int require_size(PyArrayObject*, npy_intp*, int)':
scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected `)' before 'PRIdPTR'

From reading the list I see that this problem has occurred before and has apparently been fixed. Any ideas why this is still happening?

Cheers

Loris

From ndbecker2 at gmail.com  Fri May 29 14:41:08 2009
From: ndbecker2 at gmail.com (Neal Becker)
Date: Fri, 29 May 2009 14:41:08 -0400
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
References: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com> <3d375d730905281203x5920da2bn10ccf1dc5aeb0496@mail.gmail.com>
Message-ID:

Should I file a bug report, or assume it's already being addressed?

From robert.kern at gmail.com  Fri May 29 14:42:44 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 May 2009 13:42:44 -0500
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
In-Reply-To:
References: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com> <3d375d730905281203x5920da2bn10ccf1dc5aeb0496@mail.gmail.com>
Message-ID: <3d375d730905291142i771e31f3vc1952330f5d3f52b@mail.gmail.com>

On Fri, May 29, 2009 at 13:41, Neal Becker wrote:
> Should I file a bug report, or assume it's already being addressed?

File a bug report.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From dmitrey15 at ukr.net  Fri May 29 14:51:51 2009
From: dmitrey15 at ukr.net (Dmitrey)
Date: Fri, 29 May 2009 21:51:51 +0300
Subject: [SciPy-user] how to find projection of a point to subspace?
Message-ID: <4A202EC7.8070905@ukr.net>

hi all,

Suppose I have a point x = [x0, ..., xn-1] and a linear subspace defined by Ax=b, where A is an m x n matrix and b is a vector of length m. What is the best way to find the projection of the point x onto the subspace (suitable for ill-conditioned cases)? Does anyone have some code (preferably Python or MATLAB)?

A Google search yields as its first hit the article http://cat.inist.fr/?aModele=afficheN&cpsidt=18593975 but that one is not available for free.

Thank you in advance, D.

From josef.pktd at gmail.com  Fri May 29 15:01:33 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 29 May 2009 15:01:33 -0400
Subject: [SciPy-user] Strange discontinuity in noncentral chisquare
In-Reply-To: <3d375d730905291142i771e31f3vc1952330f5d3f52b@mail.gmail.com>
References: <1cd32cbb0905281153x78b1afeby7e87d3e4c6b98529@mail.gmail.com> <3d375d730905281203x5920da2bn10ccf1dc5aeb0496@mail.gmail.com> <3d375d730905291142i771e31f3vc1952330f5d3f52b@mail.gmail.com>
Message-ID: <1cd32cbb0905291201s7fec7557ief12e6711e1f0c2a@mail.gmail.com>

On Fri, May 29, 2009 at 2:42 PM, Robert Kern wrote:
> On Fri, May 29, 2009 at 13:41, Neal Becker wrote:
>> Should I file a bug report, or assume it's already being addressed?
>
> File a bug report.

pv did it already: http://projects.scipy.org/scipy/ticket/955

Josef

From nahumoz at gmail.com  Sat May 30 06:29:30 2009
From: nahumoz at gmail.com (Oz Nahum)
Date: Sat, 30 May 2009 12:29:30 +0200
Subject: [SciPy-user] creating an array with changing resolution
In-Reply-To: <6ec71d090905300325t6d51bc65g6f1b33a1cf126b54@mail.gmail.com>
References: <6ec71d090905300325t6d51bc65g6f1b33a1cf126b54@mail.gmail.com>
Message-ID: <6ec71d090905300329y631adc9csb2a97cef0846dd45@mail.gmail.com>

Hi everyone,

I have an X,Y domain I would like to explore at different levels of detail. In Octave, I can define a vector like:

v = [1:1:40,40:0.1:50,50:1:100]

Is this the only way to do it in numpy?

v = numpy.r_[arange(0.0,40.0, 1), arange(40.0,50.0, 0.1), arange(50.0,100.0, 1)]

Not that it's bad, but I just thought maybe there's another way to do it.

Thanks in advance for any suggestion,

Oz.

--
----
Imagine there's no countries
It isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace

From eike.welk at gmx.net  Sat May 30 07:32:56 2009
From: eike.welk at gmx.net (Eike Welk)
Date: Sat, 30 May 2009 13:32:56 +0200
Subject: [SciPy-user] creating an array with changing resolution
In-Reply-To: <6ec71d090905300329y631adc9csb2a97cef0846dd45@mail.gmail.com>
References: <6ec71d090905300325t6d51bc65g6f1b33a1cf126b54@mail.gmail.com> <6ec71d090905300329y631adc9csb2a97cef0846dd45@mail.gmail.com>
Message-ID: <200905301332.57341.eike.welk@gmx.net>

On Saturday 30 May 2009, Oz Nahum wrote:
> In Octave, I can define a vector like:
>
> v = [1:1:40,40:0.1:50,50:1:100]
>
> Is this the only way to do it in numpy?
> v = numpy.r_[arange(0.0,40.0, 1), arange(40.0,50.0, 0.1), arange(50.0,100.0, 1)]

You can also write:

In [30]:r_[1:10:3, 10:20:2]
Out[30]:array([ 1,  4,  7, 10, 12, 14, 16, 18])

HTH,
Eike.

From nahumoz at gmail.com  Sat May 30 07:48:33 2009
From: nahumoz at gmail.com (Oz Nahum)
Date: Sat, 30 May 2009 13:48:33 +0200
Subject: [SciPy-user] creating an array with changing resolution
Message-ID: <6ec71d090905300448s6154438fu9f87fd435d79cf89@mail.gmail.com>

> You can also write:
> In [30]:r_[1:10:3, 10:20:2]
> Out[30]:array([ 1,  4,  7, 10, 12, 14, 16, 18])

Thanks Eike,

That does the trick,

Oz.

From dmitrey15 at ukr.net  Sat May 30 13:04:07 2009
From: dmitrey15 at ukr.net (Dmitrey)
Date: Sat, 30 May 2009 20:04:07 +0300
Subject: [SciPy-user] f2py problem
Message-ID: <4A216707.6090405@ukr.net>

Hi all,

I try to construct a Python wrapper around the Fortran routine toms587; you can download it here:
ftp://ftp.linux.kiev.ua/pub/projects/openopt/files/soft/toms587.f

So I invoke

f2py -c -m toms587 toms587.f

Then I try to run the code:

from toms587 import lsei
from numpy import *
W = array([8.94558004, 7.3286058, 3.0, 2.05011463, 0.43314039, 70.31767878, -12.29662251, -13.91359675, 75.74379415])
me,ma,mg,n,prgopt,xf,rnorme,rnorml,mode,ws,ip = 0, 3, 0, 2, ravel(1.), ravel(0.), -15.0, -15.0, -15, ravel(-15.), array((-15, -15, -15))
lsei(W,me,ma,mg,n,prgopt,xf,rnorme,rnorml,mode,ws,ip)

StdErr:
*** glibc detected *** /usr/bin/python: free(): invalid next size (fast): 0x0000000000cfb390 ***

Any ideas? I use Kubuntu 9.04, AMD 3800+ X2.

Thank you in advance, D.
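[Editor's note: a first debugging step, added as a sketch; it is not from the thread. f2py generates a docstring for every wrapped routine listing the argument types, shapes, and intents it expects, so comparing that against the call above is a quick way to find a mismatch. In particular, ws and ip are work arrays whose required lengths grow with the problem size; if they are too short, the Fortran code writes past the end of the buffers, which is exactly the kind of heap corruption that glibc's "free(): invalid next size" reports.]

from toms587 import lsei   # the f2py-built module from this thread

# f2py's generated signature: argument order, dtypes, and required array sizes
print lsei.__doc__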
From krisphoenix89 at gmail.com  Sat May 30 13:57:21 2009
From: krisphoenix89 at gmail.com (Krishna Bhagavatula)
Date: Sat, 30 May 2009 23:27:21 +0530
Subject: [SciPy-user] Stineman interpolation
Message-ID: <8d63a2500905301057y710b984cxf6820ff00b2d46b4@mail.gmail.com>

Hi,

Stineman interpolation is supposed to be a well-behaved method of interpolation. Can someone explain why it's behaving wildly in a simple case:

x = (0, 10, 70, 100)
y = (0, 535, 595, 1000)
xx = arange(0,100,1)
yy = stineman_interp(xx,x,y,yp=None)
plot(x,y,'x')
plot(xx,yy)

Are there any exceptions when it does not behave well?

From jdh2358 at gmail.com  Sat May 30 14:09:06 2009
From: jdh2358 at gmail.com (John Hunter)
Date: Sat, 30 May 2009 13:09:06 -0500
Subject: [SciPy-user] Stineman interpolation
In-Reply-To: <8d63a2500905301057y710b984cxf6820ff00b2d46b4@mail.gmail.com>
References: <8d63a2500905301057y710b984cxf6820ff00b2d46b4@mail.gmail.com>
Message-ID: <88e473830905301109k5a9b40a6ta2a9d84aa5a97917@mail.gmail.com>

On Sat, May 30, 2009 at 12:57 PM, Krishna Bhagavatula wrote:
> Hi,
>
> Stineman interpolation is supposed to be a well-behaved method of
> interpolation.
>
> Can someone explain why it's behaving wildly in a simple case:
>
> x = (0, 10, 70, 100)
> y = (0, 535, 595, 1000)
> xx = arange(0,100,1)
> yy = stineman_interp(xx,x,y,yp=None)
> plot(x,y,'x')
> plot(xx,yy)
>
> Are there any exceptions when it does not behave well?

I believe you are using the matplotlib.mlab stineman_interp function -- as far as I know, this function is not in scipy. If so, you should direct your question to matplotlib-users:

http://lists.sourceforge.net/mailman/listinfo/matplotlib-users

In your post, you may also want to CC the original author Norbert.Nemec at physik.uni-regensburg.de and describe more precisely what you mean by "behaving wildly".

JDH

From josef.pktd at gmail.com  Sat May 30 14:15:41 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 30 May 2009 14:15:41 -0400
Subject: [SciPy-user] Stineman interpolation
In-Reply-To: <88e473830905301109k5a9b40a6ta2a9d84aa5a97917@mail.gmail.com>
References: <8d63a2500905301057y710b984cxf6820ff00b2d46b4@mail.gmail.com> <88e473830905301109k5a9b40a6ta2a9d84aa5a97917@mail.gmail.com>
Message-ID: <1cd32cbb0905301115r354588b0n39b7326db2d32a9f@mail.gmail.com>

On Sat, May 30, 2009 at 2:09 PM, John Hunter wrote:
> On Sat, May 30, 2009 at 12:57 PM, Krishna Bhagavatula wrote:
>> Hi,
>>
>> Stineman interpolation is supposed to be a well-behaved method of
>> interpolation.
>>
>> Can someone explain why it's behaving wildly in a simple case:
>>
>> x = (0, 10, 70, 100)
>> y = (0, 535, 595, 1000)
>> xx = arange(0,100,1)
>> yy = stineman_interp(xx,x,y,yp=None)
>> plot(x,y,'x')
>> plot(xx,yy)
>>
>> Are there any exceptions when it does not behave well?
>
> I believe you are using the matplotlib.mlab stineman_interp function
> -- as far as I know, this function is not in scipy. If so, you should
> direct your question to matplotlib-users

Thanks for the location info; namespaces are pretty nice.
I didn't find it in numpy/scipy.

Josef

>
> http://lists.sourceforge.net/mailman/listinfo/matplotlib-users
>
> In your post, you may also want to CC the original author
> Norbert.Nemec at physik.uni-regensburg.de and describe more precisely
> what you mean by "behaving wildly"
>
> JDH

From krisphoenix89 at gmail.com  Sat May 30 14:24:26 2009
From: krisphoenix89 at gmail.com (Krishna Bhagavatula)
Date: Sat, 30 May 2009 23:54:26 +0530
Subject: [SciPy-user] Stineman interpolation
In-Reply-To: <88e473830905301109k5a9b40a6ta2a9d84aa5a97917@mail.gmail.com>
References: <8d63a2500905301057y710b984cxf6820ff00b2d46b4@mail.gmail.com> <88e473830905301109k5a9b40a6ta2a9d84aa5a97917@mail.gmail.com>
Message-ID: <8d63a2500905301124j525b57aewa508ad9a20f78d78@mail.gmail.com>

I'm using Pylab. I posted the question here because I thought Pylab comes under scipy, after reading about Pylab on scipy.org. I just found out that Stineman interpolation is not in scipy but in matplotlib, and since I imported * from pylab, it was working. I was under the impression that the function was in scipy. Thank you for the pointer.

On Sat, May 30, 2009 at 11:39 PM, John Hunter wrote:

> On Sat, May 30, 2009 at 12:57 PM, Krishna Bhagavatula
> wrote:
> > Hi,
> >
> > Stineman interpolation is supposed to be a well-behaved method of
> > interpolation.
> >
> > Can someone explain why it's behaving wildly in a simple case:
> >
> > x = (0, 10, 70, 100)
> > y = (0, 535, 595, 1000)
> > xx = arange(0,100,1)
> > yy = stineman_interp(xx,x,y,yp=None)
> > plot(x,y,'x')
> > plot(xx,yy)
> >
> > Are there any exceptions when it does not behave well?
>
> I believe you are using the matplotlib.mlab stineman_interp function
> -- as far as I know, this function is not in scipy. If so, you should
> direct your question to matplotlib-users
>
> http://lists.sourceforge.net/mailman/listinfo/matplotlib-users
>
> In your post, you may also want to CC the original author
> Norbert.Nemec at physik.uni-regensburg.de and describe more precisely
> what you mean by "behaving wildly"
>
> JDH

From robert.kern at gmail.com  Sat May 30 16:36:22 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 30 May 2009 15:36:22 -0500
Subject: [SciPy-user] how to find projection of a point to subspace?
In-Reply-To: <4A202EC7.8070905@ukr.net>
References: <4A202EC7.8070905@ukr.net>
Message-ID: <3d375d730905301336k1de5a56bobfec2f22941dad13@mail.gmail.com>

On Fri, May 29, 2009 at 13:51, Dmitrey wrote:
> hi all,
>
> Suppose I have point x = [x0, ..., xn-1] and linear subspace defined by
> Ax=b, A is m x n matrix, b is vector of length m.

For clarity, I will use "y" to refer to the vector you want to project and "x" to refer to any vector in the subspace defined by "dot(A, x) = b". And I assume that m < n.

> What is the best way to find projection of the point x to the
> subspace? (Suitable for ill-conditioned cases)

The subspace is parallel to the null space of A. You just need to translate it by a vector, which can be any solution to dot(A,x)=b. np.linalg.lstsq(A,b) gives you the minimum-norm x that satisfies this equation; let's call it x0. To find the null space, use the SVD; namely, the vectors will be Vh[m:], where Vh is the third output from np.linalg.svd(A).
Then the projection of y onto your subspace is x0 + (Vh[i:] * dot(Vh[i:], y-x0)[:,np.newaxis]).sum(axis=0) What this does is make x0 the origin temporarily; compute the decomposition of the projection onto the null space orthonormal frame; assembles the decomposition back into the original coordinates; then restores the origin. For a well-conditioned A, i==m. For ill-conditioned A, find the first i such that s[i]/s[0] < epsilon, where s is the vector of singular values (i.e. the second output from np.linalg.svd(A)). Also, use np.linalg.lstsq(A, b, rcond=s[i]) in order to find x0 in the ill-conditioned case. import numpy as np def project_subspace(y, A, b, eps=np.finfo(float).eps): """ Project a vector onto the subspace defined by "dot(A,x) = b". """ m, n = A.shape u, s, vh = np.linalg.svd(A) # Find the first singular value to drop below the cutoff. bad = (s < s[0] * eps) i = bad.searchsorted(1) if i < m: rcond = s[i] else: rcond = -1 x0 = np.linalg.lstsq(A, b, rcond=rcond)[0] null_space = vh[i:] y_proj = x0 + (null_space * np.dot(null_space, y-x0)[:,np.newaxis]).sum(axis=0) return y_proj -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dmitrey15 at ukr.net Sun May 31 04:12:53 2009 From: dmitrey15 at ukr.net (Dmitrey) Date: Sun, 31 May 2009 11:12:53 +0300 Subject: [SciPy-user] how to find projection of a point to subspace? In-Reply-To: <3d375d730905301336k1de5a56bobfec2f22941dad13@mail.gmail.com> References: <4A202EC7.8070905@ukr.net> <3d375d730905301336k1de5a56bobfec2f22941dad13@mail.gmail.com> Message-ID: <4A223C05.7030903@ukr.net> Thank you, St?fan van der Walt and Andrew York have sent me a letter with that article mentioned, where iterative algorithm is proposed and declared as more efficient than direct approach. D. Robert Kern wrote: > On Fri, May 29, 2009 at 13:51, Dmitrey wrote: > >> hi all, >> >> Suppose I have point x = [x0, ..., xn-1] and linear subspace defined by >> Ax=b, A is m x n matrix, b is vector of length m. >> > > For clarity, I will use "y" to refer to the vector you want to project > and "x" to refer to any vector in the subspace defined by "dot(A, x) = > b". And I assume that m < n. > > >> What is the best way to find projection of the point x to the >> subspace? (Suitable for ill-conditioned cases) >> > > The subspace is parallel to the null space of A. You just need to > translate it by a vector, which can be any solution to dot(A,x)=b. > np.linalg.lstsq(A,b) gives you the minimum-norm x that satisfies this > equation, let's call it x0. To find the null space, use the SVD; > namely the vectors will be Vh[m:] where Vh is the third output from > np.linalg.svd(A). Then the projection of y onto your subspace is > > x0 + (Vh[i:] * dot(Vh[i:], y-x0)[:,np.newaxis]).sum(axis=0) > > What this does is make x0 the origin temporarily; compute the > decomposition of the projection onto the null space orthonormal frame; > assembles the decomposition back into the original coordinates; then > restores the origin. > > For a well-conditioned A, i==m. For ill-conditioned A, find the first > i such that s[i]/s[0] < epsilon, where s is the vector of singular > values (i.e. the second output from np.linalg.svd(A)). Also, use > np.linalg.lstsq(A, b, rcond=s[i]) in order to find x0 in the > ill-conditioned case. 
>
> import numpy as np
>
>
> def project_subspace(y, A, b, eps=np.finfo(float).eps):
>     """ Project a vector onto the subspace defined by "dot(A,x) = b".
>     """
>     m, n = A.shape
>     u, s, vh = np.linalg.svd(A)
>     # Find the first singular value to drop below the cutoff.
>     bad = (s < s[0] * eps)
>     i = bad.searchsorted(1)
>     if i < m:
>         rcond = s[i]
>     else:
>         rcond = -1
>     x0 = np.linalg.lstsq(A, b, rcond=rcond)[0]
>     null_space = vh[i:]
>     y_proj = x0 + (null_space * np.dot(null_space,
>         y-x0)[:,np.newaxis]).sum(axis=0)
>     return y_proj

From wierob83 at googlemail.com Sun May 31 06:59:23 2009
From: wierob83 at googlemail.com (wierob)
Date: Sun, 31 May 2009 12:59:23 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
Message-ID: <4A22630B.1080909@googlemail.com>

Hi, how can I use pcolor and a scatter plot in one image? I have a scatter plot where a lot of data points are so close to each other that they are drawn as (almost) one point. So I'm trying to visualize which area of the scatter plot contains the most data points. Using pcolor I can draw a grid where each cell visualizes the relative number of data points by a different color. If I try to draw the scatter plot and the grid in the same image, only the scatter plot is drawn, regardless of the invocation order of plot and pcolor:

...
plot(...)
pcolor(...)
show()

...
pcolor(...)
plot(...)
show()

Both return only the scatter plot. I'm new to Scipy. What am I doing wrong? Thanks in advance.

kind regards
robert

From emmanuelle.gouillart at normalesup.org Sun May 31 08:10:19 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sun, 31 May 2009 14:10:19 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <4A22630B.1080909@googlemail.com>
References: <4A22630B.1080909@googlemail.com>
Message-ID: <20090531121019.GA21468@phare.normalesup.org>

Hi Rob, could you give a few more details about what you're doing? If your problem is specifically a plotting issue, you should rather write to the matplotlib mailing-list (http://sourceforge.net/mail/?group_id=80706). I tried to reproduce what you describe and, as far as I'm concerned, I have no problem plotting on the same figure a scatter plot of the data and a grid of the coarsened density of points. See below for the code I used. Is this what you want to do? Cheers, Emmanuelle

***
import numpy as np
import pylab as pl

N = 1000
n = 10
np.random.seed(3)  # always use the same seed
x, y = np.random.randn(2, N)/10 + 0.5
X, Y = np.mgrid[0:1:n*1j, 0:1:n*1j]
xfloor = X[:,0][np.floor(n*x).astype(int)]
yfloor = Y[0][np.floor(n*y).astype(int)]
z = xfloor + n*yfloor
Z = X + n*Y
histo = np.histogram(z.ravel(), bins=np.r_[Z.T.ravel(), 2*n**2])
# shifted to have centered bins
pl.pcolor(X - 1./(2*n), Y - 1./(2*n), histo[0].reshape((n, n)))
pl.scatter(x, y)
pl.show()

On Sun, May 31, 2009 at 12:59:23PM +0200, wierob wrote: > [...]

From emmanuelle.gouillart at normalesup.org Sun May 31 08:41:51 2009
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sun, 31 May 2009 14:41:51 +0200
Subject: [SciPy-user] How to use pcolor and scatter plot in one image?
In-Reply-To: <20090531121019.GA21468@phare.normalesup.org>
References: <4A22630B.1080909@googlemail.com> <20090531121019.GA21468@phare.normalesup.org>
Message-ID: <20090531124151.GA9852@phare.normalesup.org>

Oops, sorry, there was a bug in my code, as I had interchanged x and y (this proves once again that tests should always be based on non-symmetric data!). Here is the corrected version. Emmanuelle

import numpy as np
import pylab as pl

N = 1000
n = 10
np.random.seed(3)
x, y = np.random.randn(2, N)/10 + 0.5
y -= 0.1
X, Y = np.mgrid[0:1:n*1j, 0:1:n*1j]
xfloor = X[:,0][np.floor(n*x).astype(int)]
yfloor = Y[0][np.floor(n*y).astype(int)]
z = yfloor + n*xfloor
Z = Y + n*X
histo = np.histogram(z.ravel(), bins=np.r_[Z.ravel(), 2*n**2])
pl.pcolor(X - 1./(2*n), Y - 1./(2*n), histo[0].reshape((n, n)))
pl.scatter(x, y)
pl.show()

On Sun, May 31, 2009 at 02:10:19PM +0200, Emmanuelle Gouillart wrote: > [...]

From andrewenoble at gmail.com Sun May 31 12:17:41 2009
From: andrewenoble at gmail.com (physeco)
Date: Sun, 31 May 2009 09:17:41 -0700 (PDT)
Subject: [SciPy-user] basic usage of fmin_tnc and fmin_l_bfgs_b
Message-ID: <23798939.post@talk.nabble.com>

I'm new to multidimensional optimization with scipy.
Sorry for asking the simple question, but I can't figure out the syntax for fmin_tnc and fmin_l_bfgs_b. Here's a simple example of the error I'm getting:

Executing:
>def f(x):
        return (x[0]*x[1]-1)**2+1
>g=0.1,0.1
>b=[(-10,10),(-10,10)]
>so.fmin_tnc(f,g,bounds=b)

Leads to the error:
---------------------------------------------------------------------------
/usr/lib/python2.5/site-packages/scipy/optimize/tnc.py in fmin_tnc(func, x0, fprime, args, approx_grad, bounds, epsilon, scale, offset, messages, maxCGit, maxfun, eta, stepmx, accuracy, fmin, ftol, xtol, pgtol, rescale)
    244     rc, nf, x = moduleTNC.minimize(func_and_grad, x0, low, up, scale, offset,
    245             messages, maxCGit, maxfun, eta, stepmx, accuracy,
--> 246             fmin, ftol, xtol, pgtol, rescale)
    247     return array(x), nf, rc
    248

/usr/lib/python2.5/site-packages/scipy/optimize/tnc.py in func_and_grad(x)
    203         def func_and_grad(x):
    204             x = asarray(x)
--> 205             f, g = func(x, *args)
    206             return f, list(g)
    207         else:

TypeError: 'numpy.float64' object is not iterable
---------------------------------------------------------------------------

I get a similar error from so.fmin_l_bfgs_b(f,g,bounds=b). If you can point out my mistake, it would be greatly appreciated.

Thank you!
--
View this message in context: http://www.nabble.com/basic-usage-of-fmin_tnc-and-fmin_l_bfgs_b-tp23798939p23798939.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From josef.pktd at gmail.com Sun May 31 12:54:31 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 31 May 2009 12:54:31 -0400
Subject: [SciPy-user] basic usage of fmin_tnc and fmin_l_bfgs_b
In-Reply-To: <23798939.post@talk.nabble.com>
References: <23798939.post@talk.nabble.com>
Message-ID: <1cd32cbb0905310954j43e07587qda0ee21befa334e1@mail.gmail.com>

On Sun, May 31, 2009 at 12:17 PM, physeco wrote: > I'm new to multidimensional optimization with scipy. Sorry for asking the > simple question, but I can't figure out the syntax for fmin_tnc and > fmin_l_bfgs_b. Here's a simple example of the error I'm getting: > [...] > TypeError: 'numpy.float64' object is not iterable > [...] > I get a similar error from so.fmin_l_bfgs_b(f,g,bounds=b). If you can point > out my mistake, it would be greatly appreciated. > > Thank you!

from the description, the function needs to return both the function value and the gradient values:

func : callable func(x, *args)
    Function to minimize. Should return f and g, where f is the value of the function and g its gradient (a list of floats). If the function returns None, the minimization is aborted.

import numpy as np
import scipy.optimize as so

def f(x):
    return (x[0]*x[1]-1)**2+1, [(x[0]*x[1]-1)*x[1], (x[0]*x[1]-1)*x[0]]

g = np.array([0.1,0.1])
b = [(-10,10),(-10,10)]
so.fmin_tnc(f, g, bounds=b)

Josef

From josef.pktd at gmail.com Sun May 31 13:03:49 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 31 May 2009 13:03:49 -0400
Subject: [SciPy-user] basic usage of fmin_tnc and fmin_l_bfgs_b
In-Reply-To: <1cd32cbb0905310954j43e07587qda0ee21befa334e1@mail.gmail.com>
References: <23798939.post@talk.nabble.com> <1cd32cbb0905310954j43e07587qda0ee21befa334e1@mail.gmail.com>
Message-ID: <1cd32cbb0905311003s183cc2ebh3229563465cf0a29@mail.gmail.com>

On Sun, May 31, 2009 at 12:54 PM, wrote: > On Sun, May 31, 2009 at 12:17 PM, physeco wrote: >> [...] > > from the description, the function needs to return both the function value and the gradient values: > [...] > import numpy as np > import scipy.optimize as so > def f(x): >     return (x[0]*x[1]-1)**2+1, [(x[0]*x[1]-1)*x[1], (x[0]*x[1]-1)*x[0]] > g = np.array([0.1,0.1]) > b = [(-10,10),(-10,10)] > so.fmin_tnc(f, g, bounds=b)

I usually check whether there are any good usage examples in the test files; for these cases, see scipy\optimize\tests\test_optimize.py

Josef

From josef.pktd at gmail.com Sun May 31 14:18:11 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 31 May 2009 14:18:11 -0400
Subject: [SciPy-user] basic usage of fmin_tnc and fmin_l_bfgs_b
In-Reply-To: <1cd32cbb0905311003s183cc2ebh3229563465cf0a29@mail.gmail.com>
References: <23798939.post@talk.nabble.com> <1cd32cbb0905310954j43e07587qda0ee21befa334e1@mail.gmail.com> <1cd32cbb0905311003s183cc2ebh3229563465cf0a29@mail.gmail.com>
Message-ID: <1cd32cbb0905311118v3b2ad80blc8c7dad348cdbf13@mail.gmail.com>

On Sun, May 31, 2009 at 1:03 PM, wrote: > On Sun, May 31, 2009 at 12:54 PM, wrote: >> [...] >> import numpy as np >> import scipy.optimize as so >> def f(x): >>     return (x[0]*x[1]-1)**2+1, [(x[0]*x[1]-1)*x[1], (x[0]*x[1]-1)*x[0]] >> g = np.array([0.1,0.1]) >> b = [(-10,10),(-10,10)] >> so.fmin_tnc(f, g, bounds=b) > > I usually check whether there are any good usage examples in the test files; for these cases, see > scipy\optimize\tests\test_optimize.py

with a numerical gradient:

def f0(x):
    return (x[0]*x[1]-1)**2+1

print so.fmin_tnc(f0, g, bounds=b, approx_grad=1)

note: my gradients above are missing a factor of 2

Josef

From gilles.rochefort at gmail.com Sun May 31 17:43:30 2009
From: gilles.rochefort at gmail.com (Gilles Rochefort)
Date: Sun, 31 May 2009 23:43:30 +0200
Subject: [SciPy-user] basic usage of fmin_tnc and fmin_l_bfgs_b
In-Reply-To: <23798939.post@talk.nabble.com>
References: <23798939.post@talk.nabble.com>
Message-ID: <4A22FA02.9080809@gmail.com>

physeco a écrit : > I'm new to multidimensional optimization with scipy. Sorry for asking the > simple question, but I can't figure out the syntax for fmin_tnc and > fmin_l_bfgs_b. Here's a simple example of the error I'm getting: > > Executing: >> def f(x): >         return (x[0]*x[1]-1)**2+1 >> g=0.1,0.1 >> b=[(-10,10),(-10,10)] >> so.fmin_tnc(f,g,bounds=b) > > Leads to the error: > [...] > TypeError: 'numpy.float64' object is not iterable > [...] > > I get a similar error from so.fmin_l_bfgs_b(f,g,bounds=b). If you can point > out my mistake, it would be greatly appreciated. > > Thank you!

Hi, the function to be minimized is supposed to return both the function value and the gradient. Alternatively, you can provide a gradient function separately. If you do not want to, or do not have, a gradient function to provide, you can always set the approx_grad argument to True, which computes a numerical approximation of the gradient.
Coming back to your example:

fmin_l_bfgs_b(f, g, approx_grad=True, bounds=b)

(array([ 0.99999789,  0.99999789]), 1.0000000000178644, {'funcalls': 8,
 'grad': array([ -8.45989945e-06,  -8.45989945e-06]), 'nbiter': 4,
 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'warnflag': 0})

fmin_tnc works the same way as fmin_l_bfgs_b.

Gilles.
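(Putting the thread together, a minimal self-contained sketch of the two calling conventions discussed above: an objective that returns (value, gradient), versus a plain objective combined with approx_grad. The names f_with_grad, f_plain, and x_start are illustrative, not from the thread:)

import numpy as np
import scipy.optimize as so

def f_with_grad(x):
    # Objective returning (value, gradient), as fmin_tnc and fmin_l_bfgs_b
    # expect when no separate fprime is given and approx_grad is not set.
    v = x[0]*x[1] - 1
    return v**2 + 1, [2*v*x[1], 2*v*x[0]]

def f_plain(x):
    # Objective returning only the value, for use with approx_grad.
    return (x[0]*x[1] - 1)**2 + 1

x_start = np.array([0.1, 0.1])
bounds = [(-10, 10), (-10, 10)]

# With an analytic gradient:
x_opt, nfeval, rc = so.fmin_tnc(f_with_grad, x_start, bounds=bounds)

# With a numerical (finite-difference) gradient:
x_opt2, f_opt2, info = so.fmin_l_bfgs_b(f_plain, x_start, approx_grad=True, bounds=bounds)

# Both minimizers should end up near the curve x[0]*x[1] == 1.
print(x_opt, x_opt2)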