From akshaysrinivasan at gmail.com Tue Jun 1 01:10:26 2010
From: akshaysrinivasan at gmail.com (Akshay Srinivasan)
Date: Tue, 01 Jun 2010 10:40:26 +0530
Subject: [SciPy-User] Kinpy
In-Reply-To:
References: <4C03E52F.1010309@gmail.com> <4C04065F.5090509@sun.ac.za>
Message-ID: <4C049642.8080906@gmail.com>

On 06/01/10 04:12, Matthew Brett wrote:
> Hi,
>
> On Mon, May 31, 2010 at 11:56 AM, Johann Rohwer wrote:
>> You might be interested in PySCeS, the Python Simulator for Cellular Systems
>> (http://pysces.sf.net),
>
> I bow low in respect for that excellent name. I don't know who came
> up with it, but whoever it was deserves due honor ;)
I'm the honorable one :)

@Johann
Ahh, very cool. I didn't know about PySCeS, will give it a try. Thanks.

@Dmitrey
I don't see why I'd want something more than a lambda function. Care to elaborate?

From massimodisasha at gmail.com Tue Jun 1 03:07:06 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Tue, 1 Jun 2010 09:07:06 +0200
Subject: [SciPy-User] numpy , scipy : test skipped on osx
Message-ID: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com>

Hi All,

I'm on OS X 10.6.3, Python 2.6.5, numpy and scipy (svn versions).

Trying to run the numpy test suite I get:

>>> import numpy
[461526 refs]
>>> numpy.test('1','10')
Running unit tests for numpy
NumPy version 2.0.0.dev8448
NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
nose version 0.11.3
nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/DEV_README.txt is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/INSTALL.txt is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/LICENSE.txt is executable; skipped
...
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/linalg/tests/test_build.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/linalg/tests/test_linalg.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/linalg/tests/test_regression.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/ma/tests/test_core.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/ma/tests/test_extras.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/matrixlib/tests/test_defmatrix.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/oldnumeric/tests/test_oldnumeric.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/polynomial/tests/test_chebyshev.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/polynomial/tests/test_polynomial.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/random/tests/test_random.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/testing/tests/test_decorators.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/testing/tests/test_utils.py is executable; skipped
nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/tests/test_ctypeslib.py is executable; skipped

----------------------------------------------------------------------
Ran 0 tests in 0.138s

OK
[494104 refs]
>>>

Have you any clue on how I can fix it? Thanks for any help!

Massimo.

From cournape at gmail.com Tue Jun 1 03:19:58 2010
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 1 Jun 2010 16:19:58 +0900
Subject: [SciPy-User] numpy , scipy : test skipped on osx
In-Reply-To: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com>
References: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com>
Message-ID:

On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
 wrote:
> Hi All,
> i'm on OS X 10.6.3
> python 2.6.5
> numpy, scipy (svn versions)
>
> trying to run the numpy test suite I get:
>
>>>> import numpy
> [461526 refs]
>>>> numpy.test('1','10')
> Running unit tests for numpy
> NumPy version 2.0.0.dev8448
> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
> nose version 0.11.3
> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped

How did you install numpy ?

David

From massimodisasha at gmail.com Tue Jun 1 03:26:19 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Tue, 1 Jun 2010 09:26:19 +0200
Subject: [SciPy-User] numpy , scipy : test skipped on osx
In-Reply-To:
References: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com>
Message-ID: <35A402E8-B457-4435-A36C-9E89FCA3A8C4@gmail.com>

I installed numpy / scipy from svn.

I used:

python setup.py build
sudo python setup.py install

I'm using a Python 2.6.5 installed from source in
/usr/local/gislib/unix/bin/python

I had the same problem some months ago (using the system Python that
comes with OS X). Someone helped me on the scipy IRC channel, but I
forget what we did to fix the "skipped test" problem.

Thanks,

Massimo.

On 01/06/2010, at 09:19, David Cournapeau wrote:

> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
> wrote:
>> Hi All,
>> i'm on OS X 10.6.3
>> python 2.6.5
>> numpy, scipy (svn versions)
>>
>> trying to run the numpy test suite I get:
>>
>>>>> import numpy
>> [461526 refs]
>>>>> numpy.test('1','10')
>> Running unit tests for numpy
>> NumPy version 2.0.0.dev8448
>> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
>> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
>> nose version 0.11.3
>> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
>> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped
>
> How did you install numpy ?
>
> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From rajs2010 at gmail.com Tue Jun 1 06:46:27 2010
From: rajs2010 at gmail.com (Rajeev)
Date: Tue, 1 Jun 2010 03:46:27 -0700 (PDT)
Subject: [SciPy-User] performance python
Message-ID: <60e3d5d7-3d97-4426-bd2a-92ee84bc2372@t14g2000prm.googlegroups.com>

Hi,

I was trying to run the codes given at http://www.scipy.org/PerformancePython
and I got the following errors for blitz:

/usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/mathfunc.h:45: error: 'labs' is not a member of 'std'
/usr/lib/python2.5/site-packages/scipy/weave/blitz/blitz/funcs.h:509: error: call of overloaded 'abs(int&)' is ambiguous

However, I could get results for the rest of the cases (except pyrex and
psyco). On my desktop PC I got:

Doing 100 iterations on a 1000x1000 grid
numeric took 6.16 seconds
fastinline took 1.7 seconds
fortran77 took 1.71 seconds
fortran90-arrays took 2.19 seconds
fortran95-forall took 2.18 seconds
slow (1 iteration) took 8.17 seconds
100 iterations should take about 817.000000 seconds
You don't have Psyco installed!

For C++,
Enter nx n_iter eps --> 1000 100 1e-16
nx = 1000, ny = 1000, n_iter = 100, eps = 1e-16
0.326132
Iterations took 3.93 seconds.

For matlab,
>> tic; laplace; toc
Elapsed time is 7.750676 seconds.

With octave,
octave:3> tic; laplace; toc
Elapsed time is 15.6242 seconds.
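For reference, the NumPy kernel that those laplace.py timings exercise is,
in essence, the following slice-based Jacobi update. This is a sketch from
memory, not the exact code on the PerformancePython page; the grid size and
iteration count are chosen to match the runs above:

import numpy as np

def jacobi_step(u, dx2, dy2):
    # update all interior points at once with array slices (no Python loop)
    u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) * dy2 +
                     (u[1:-1, 2:] + u[1:-1, :-2]) * dx2) / (2.0 * (dx2 + dy2))

u = np.zeros((1000, 1000))
u[0, :] = 1.0                        # one hot boundary, just for illustration
for _ in range(100):                 # 100 iterations, as in the timings above
    jacobi_step(u, dx2=1.0, dy2=1.0)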
On another machine which is a cluster for parallel computation with 32 nodes, I got the following numeric took 7.64 seconds fastinline took 3.42 seconds fortran77 took 3.56 seconds fortran90-arrays took 2.38 seconds fortran95-forall took 2.39 seconds slow (1 iteration) took 6.33 seconds 100 iterations should take about 633.000000 seconds You don't have Psyco installed! For C++, Enter nx n_iter eps --> 1000 100 1e-16 nx = 1000, ny = 1000, n_iter = 100, eps = 1e-16 0.326132 Iterations took 6.05 seconds. For octave, octave:1> tic; laplace; toc Elapsed time is 15.9253 seconds. Can someone explain the relative difference of performance? Also we should add an example with cython. Meanwhile please help me with the error in blitz. Best wishes, Rajeev From vincent at vincentdavis.net Tue Jun 1 10:00:07 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Tue, 1 Jun 2010 08:00:07 -0600 Subject: [SciPy-User] scipy.io.matlab.loadmat error In-Reply-To: References: <8CA9D85A-CA93-4B7F-8434-02F633C44090@gmail.com> Message-ID: Work on ( I guess, never used it) Mac Osx 10.6 running python 2.6 In [7]: mt.loadmat('teste.mat') /Library/Frameworks/EPD64.framework/Versions/6.1/lib/python2.6/site-packages/scipy/io/matlab/mio.py:99: FutureWarning: Using struct_as_record default value (False) This will change to True in future versions return MatFile5Reader(byte_stream, **kwargs) Out[7]: {'__globals__': [], '__header__': 'MATLAB 5.0 MAT-file, Platform: MACI, Created on: Mon May 31 21:06:09 2010', '__version__': '1.0', 'x': array([[0, 1, 3, 0, 1, 3, 4, 5, 7, 7]], dtype=uint8)} On Mon, May 31, 2010 at 9:58 PM, Matthew Brett wrote: > Hi, > >>> ?But I thought I had fixed that on the SVN ??? >> >> You did but I assume that only applied to csv (type?) files. >> I was thinking that they may have a "similar" problem with this mat >> file. But I tried to clearly say I have no idea. > > Actually the .mat files are a custom binary format by matlab - we > don't use the genfromtxt stuff to load them... > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From afraser at lanl.gov Tue Jun 1 10:45:21 2010 From: afraser at lanl.gov (Andy Fraser) Date: Tue, 01 Jun 2010 08:45:21 -0600 Subject: [SciPy-User] using multiple processors for particle filtering In-Reply-To: <97132D07-D91C-45F1-BACD-AAE476E91F9F@yale.edu> (Zachary Pincus's message of "Thu\, 27 May 2010 23\:13\:20 -0400") References: <8739xgndes.fsf@lanl.gov> <8763292fi4.fsf@lanl.gov> <97132D07-D91C-45F1-BACD-AAE476E91F9F@yale.edu> Message-ID: <87k4qizu9q.fsf@lanl.gov> Zach, Thank you for your detailed reply. The way I've structured my code makes it difficult to implement your advice. After taking some time to work on the problem, I will post again. I may not get back to it till after my summer vacation. >>>>> "ZP" == Zachary Pincus writes: ZP> [...] Several problems here: ZP> (1) I am sorry I didn't mention this earlier, but looking over ZP> your original email, it appears that your single-process code ZP> might be very inefficient: it seems to perturb each particle ZP> individually in a for- loop rather than working on an array of ZP> all the particles. [...] Correct. My particles are planes that carry cameras. I have three kinds of classes: ParticleFilters, Planes, and Cameras. That structure makes it easy to change the characteristics of the Planes or Cameras by using subclasses at the expense of making it hard to speed things up. 
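To make ZP's point (1) above concrete, here is a minimal sketch of the
difference; the sizes, sigma and the flat `states` array are made-up
stand-ins for whatever the Plane/Camera classes actually store:

import numpy as np

n_particles, state_dim, sigma = 10000, 6, 0.05   # hypothetical sizes
states = np.zeros((n_particles, state_dim))      # all particle states in one array

# per-particle loop (slow: one Python-level call per particle):
# for i in xrange(n_particles):
#     states[i] += np.random.normal(scale=sigma, size=state_dim)

# array-at-once (fast: a single call perturbs every particle):
states += np.random.normal(scale=sigma, size=states.shape)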
ZP> (2) From the slowdowns you report, it looks like overhead ZP> costs are completely dominating. For each job, the code and ZP> data need to be serialized (pickled, I think, is how the ZP> multiprocessing library handles it), written to a pipe, ZP> unpickled, executed, and the results need to be pickled, sent ZP> back, and unpickled. Perhaps using memmap to share state might ZP> be better? Or you can make sure that the function parameters ZP> and results can be very rapidly pickled and unpickled (single ZP> numpy arrays, e.g., not lists-of-sub-arrays or something). I suspected that [un]pickling was the dominating factor. I had not looked at mmap before. It looks like a better tool. Andy From zachary.pincus at yale.edu Tue Jun 1 10:52:25 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 1 Jun 2010 10:52:25 -0400 Subject: [SciPy-User] using multiple processors for particle filtering In-Reply-To: <87k4qizu9q.fsf@lanl.gov> References: <8739xgndes.fsf@lanl.gov> <8763292fi4.fsf@lanl.gov> <97132D07-D91C-45F1-BACD-AAE476E91F9F@yale.edu> <87k4qizu9q.fsf@lanl.gov> Message-ID: > Thank you for your detailed reply. The way I've structured my code > makes it difficult to implement your advice. After taking some time > to work on the problem, I will post again. I may not get back to it > till after my summer vacation. You're welcome; good luck! Overall, the only take-home here is that the main commandment in writing numerical codes for interpreted languages like Python or Matlab is: "Thou shalt not write looping constructs". If your design absolutely requires iteration over individual data elements (and they often do), you might look at cython, which is a nice way to write in (more or less) python that gets compiled to C and can easily interact with python objects / numpy arrays. Zach On Jun 1, 2010, at 10:45 AM, Andy Fraser wrote: > Zach, > > Thank you for your detailed reply. The way I've structured my code > makes it difficult to implement your advice. After taking some time > to work on the problem, I will post again. I may not get back to it > till after my summer vacation. > >>>>>> "ZP" == Zachary Pincus writes: > > ZP> [...] Several problems here: > > ZP> (1) I am sorry I didn't mention this earlier, but looking over > ZP> your original email, it appears that your single-process code > ZP> might be very inefficient: it seems to perturb each particle > ZP> individually in a for- loop rather than working on an array of > ZP> all the particles. [...] > > Correct. My particles are planes that carry cameras. I have three > kinds of classes: ParticleFilters, Planes, and Cameras. That > structure makes it easy to change the characteristics of the Planes or > Cameras by using subclasses at the expense of making it hard to speed > things up. > > ZP> (2) From the slowdowns you report, it looks like overhead > ZP> costs are completely dominating. For each job, the code and > ZP> data need to be serialized (pickled, I think, is how the > ZP> multiprocessing library handles it), written to a pipe, > ZP> unpickled, executed, and the results need to be pickled, sent > ZP> back, and unpickled. Perhaps using memmap to share state might > ZP> be better? Or you can make sure that the function parameters > ZP> and results can be very rapidly pickled and unpickled (single > ZP> numpy arrays, e.g., not lists-of-sub-arrays or something). > > I suspected that [un]pickling was the dominating factor. I had not > looked at mmap before. It looks like a better tool. 
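For what it's worth, the memmap pattern ZP mentions might look like the
following: workers receive only a filename and an index range (both cheap
to pickle) and read or write particle state through the shared file. A
sketch under assumed sizes, untested:

import numpy as np
from multiprocessing import Pool

FNAME, N, D = 'particles.dat', 100000, 6   # hypothetical scratch file and sizes

def perturb_chunk(bounds):
    lo, hi = bounds
    mm = np.memmap(FNAME, dtype='float64', mode='r+', shape=(N, D))
    mm[lo:hi] += np.random.normal(scale=0.05, size=(hi - lo, D))
    mm.flush()
    return lo                              # only tiny objects cross the pipe

if __name__ == '__main__':
    mm = np.memmap(FNAME, dtype='float64', mode='w+', shape=(N, D))
    mm.flush()
    chunks = [(i, min(i + 25000, N)) for i in xrange(0, N, 25000)]
    Pool(4).map(perturb_chunk, chunks)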
>
> Andy
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From lorenzo.isella at gmail.com Tue Jun 1 11:16:35 2010
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 01 Jun 2010 17:16:35 +0200
Subject: [SciPy-User] Again on Calculating (Conditional) Time Intervals
Message-ID: <1275405395.2088.26.camel@rattlesnake>

Dear All,
I hope this is not too off-topic. I have dug up an old email I posted,
which went unanswered quite some time ago.
I made some progress on a simpler problem than the one for which I
initially asked for help, and I am attaching my own scripts at the end
of the email. If anyone can help me to progress a bit further, I will
be very grateful.
Consider an array of this kind:

1 12 45
2 7 12
2 15 37
3 25 89
3 8 13
3 13 44
4 77 89
4 77 89
5 12 22
8 12 22
9 15 22
11 22 37
23 3 12
24 18 37
25 1 12

where the first column is time measured in some units. The other two
columns are some ID's identifying infected individuals establishing a
contact at the corresponding time. As you can see, there may be
time gaps in my recorded times, and there may be repeated times if
several contacts take place simultaneously. The ID's are always sorted
so that the ID number in the 2nd column is always smaller than
the corresponding entry in the third column (I am obviously indexing
everything from 1).
Now, this is my problem: I want to look at a specific ID I will call A
(let us say A is 12) and calculate all the time differences t_AC-t_AB
for B!=C, i.e. all the time intervals between the most recent contact
between A and B and the first subsequent contact between A and C
(which has to be different from B).
An example to fix the ideas: A=12, B=22, C=1, then

t_AB=8 (pick the most recent one before t_AC)
t_AC=25,

hence t_AC-t_AB=25-8=17. (But let me say it again: I want to be able to
calculate all such intervals for any B and C on the fly.)
It should be clear at this point that the calculated t_AC-t_AB !=
t_AB-t_AC, as some time ordering is implicit in the definition (in
t_AC-t_AB, AC contacts always have to be more recent than AB contacts).
Even in the case of multiple disjoint AB and AC contacts, I always
have to look for the closest time intervals in time. E.g. if I had

10 12 22
40 12 22
60 1 12
100 1 12
110 12 22
130 12 22
150 1 12

then I would work out the time intervals 60-40=20 and 150-130=20.
Now, thanks to the help I got from the list, I am able to calculate the
distribution of contact and interval durations between IDs (something
simpler than the conditional time interval). See the code at the end of
the email, which you can run on the first dataset I provide in this
email to get the contact/interval distributions.
Sorry for the long email, but any suggestion about how to calculate the
conditional probability efficiently would help me a great deal.
Many thanks

Lorenzo

#!/usr/bin/env python
import scipy as s
import pylab as p
import numpy as n
import sys
import string


def single_tag_contact_times(sliced_data, tag_id):

    #I can follow a given tag by selecting his ID number and looking
    #for it through the data

    sel=s.where(sliced_data==tag_id)

    #now I need to add a condition in case the id of the tag I have chosen is non-existing

    if (len(sel[0])==0):
        #print "the chosen tag does not exist"
        return

    tag_contact_times=sliced_data[sel[0],0] #I select the times at which
    #the tag I am tracking undergoes a contact.
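    #Caveat: the comparison above scans all three columns, including the
    #time column, so a timestamp that happens to be numerically equal to
    #tag_id would be picked up as a spurious match; restricting the test
    #to the ID columns, e.g. s.where(sliced_data[:,1:3]==tag_id), would
    #be safer.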
    tag_no_rep=n.unique1d(tag_contact_times) #The idea is the following:
    #in a given time interval delta_slice, a tag may undergo multiple contacts
    #with different tags. This corresponds to different entries in the output
    #of time_binned_interaction. That function does not allow for multiple contacts between
    #the SAME two tags being reported in the same time slice, but it allows the same tag ID
    #to appear twice in the same time window if it gets in touch with TWO different tags
    #within the same delta_slice. It is fundamental to know that in a given time slice
    #tag A has established contact with tag B and tag C (if I discard any bit of this info,
    #then I lose info about the state of the network at that time slice), but when it comes to
    #simply having the time-dependent distribution of contact durations and intervals between
    #any two contacts established by packet A, I will simply say that tag A reported a contact
    #in that given time slice. More sophisticated statistics (e.g. the number of contacts
    #established by tag A in a given time slice) can be implemented if found useful/needed
    #later on.

    #p.save("single_tag_contact_times_no_rep.dat",tag_no_rep,fmt='%d')

    return tag_no_rep


def contact_duration_and_interval_many_tags(sliced_interactions,\
                                            delta_slice, counter):

    #I added this line since now there is no guarantee that in the edge list
    #(contact list) tag_A---tag_B, the id of tag_A is <= id of tag_B.

    sliced_interactions[:,1:3]=s.sort(sliced_interactions[:,1:3])

    #This function iterates interval_between_contacts_single_tag on all the tag ID`s,
    #thus outputting the distribution of time intervals between any two contacts in the system.

    tag_ids= n.unique1d(s.ravel(sliced_interactions[:,1:3])) #to get a list of
    #all tag ID`s, which appear (repeated) on two rows of the matrix output by
    #time_binned_interaction

    #n.savetxt("tag_IDs.dat", tag_ids , fmt='%d')

    # tag_ids=tag_ids.astype('int')

    #print "tag_ids is, ", tag_ids

    overall_gaps=s.zeros(0) #this array will contain the time intervals between two consecutive
    #contacts for all the tags in the system.

    overall_duration=s.zeros(0) #this array will contain the time duration of the
    #contacts for all the tags in the system.

    for i in xrange(len(tag_ids)):
        track_tag_id=tag_ids[i] #i.e. iterate on all tags

        contact_times=single_tag_contact_times(sliced_interactions, track_tag_id) #get
        #an array with all the interactions of a given tag

        #print "contact_times is, ", contact_times

        results=contact_duration_and_interval_single_tag(contact_times, delta_slice)

        tag_duration=results[0]

        tag_intervals=results[1] #get
        #an array with the time intervals between two contacts for a given tag

        #print "tag_intervals is, ", tag_intervals

        overall_gaps=s.hstack((overall_gaps,tag_intervals)) #collect
        #the results on all tags

        #print "overall_gaps is, ", overall_gaps

        overall_duration=s.hstack((overall_duration,tag_duration))

    #overall_gaps=overall_gaps[s.where(overall_gaps !=0)]
    #overall_duration=overall_duration[s.where(overall_duration !=0)]

    filename="many_tags_contact_interval_distr2_%01d"%(counter+1)
    filename=filename+"_.dat"

    n.savetxt(filename, overall_gaps , fmt='%d')

    filename="many_tags_contact_duration_distr2_%01d"%(counter+1)
    filename=filename+"_.dat"

    n.savetxt(filename, overall_duration , fmt='%d')

    return overall_duration, overall_gaps


def contact_duration_and_interval_single_tag(single_tag_no_rep, delta_slice):

    #the following if condition is useful only when I am really tracking a particular
    #tag whose ID is given a priori but which may not exist at all (in the sense that
    #it would not establish any contact) in the time window during which I am studying
    #the system.

    if (single_tag_no_rep==None):
        print "The chosen tag does not exist hence no analysis can be performed on it"
        return

    # delta_slice=int(delta_slice) #I do not need floating point arithmetic

    single_tag_no_rep=(single_tag_no_rep-single_tag_no_rep[0])/delta_slice
    gaps=s.diff(single_tag_no_rep) #a bit more efficient than the line above

    #print "gaps is, ", gaps

    #gaps is now an array of integers. It either has a list of consecutive 1`s
    #(which means a contact duration of delta_slice times the number of consecutive ones)
    #or an entry higher than one which expresses (in units of delta_slice) the time during
    #which the tag underwent no contact

    #p.save("gaps.dat",gaps, fmt='%d')

    # find_gap=s.where(gaps != 1)[0]

    find_gap=s.where(gaps > 1)[0] #a better definition: a tag may establish
    #several contacts within the same time slice. So I may have some zeros in
    #gaps due to different simultaneous contacts. A tag is truly disconnected
    #from all the others when I see an increment larger than one in the
    #rescaled time.

    gap_distr=(gaps[find_gap]-1)*delta_slice #so, this is really the list of the
    #time intervals between two contacts for my tag. After the discussion with Ciro,
    #I modified slightly the definition (now there is a -1) in the definition.
    #It probably does not matter much for the calculated distribution.
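    #Worked example with delta_slice=1: contact times [5, 8, 25]
    #-> rescaled times [0, 3, 20], gaps = [3, 17], find_gap = [0, 1],
    #-> gap_distr = (gaps[find_gap]-1)*delta_slice = [2, 16].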
    #print "gap_distr is, ", gap_distr
    #NB: the procedure above does NOT break down if gap_distr is empty

    #Now I calculate the duration of the contacts of my tag. I changed this bit since
    #I had new discussions with Ciro

    #single_tag_no_rep=s.hstack((0,single_tag_no_rep))

    #print "single_tag_no_rep is, ", single_tag_no_rep, "and its length is, ", len(single_tag_no_rep)

    # e2=s.diff(single_tag_no_rep)
    # #print "e2 is, ", e2
    # sel=s.where(e2!=1)[0]
    # #print "sel is, ", sel
    #sel=s.where(gaps!=1)[0]
    # res=0 #this will contain the results and will be overwritten

    #What is here needs to be tested very carefully! There may be some bugs

    sol=s.hstack((0,find_gap,len(gaps)))
    #print "sol is, ", sol
    res=s.diff(sol)
    #print "res initially is, ", res
    res[0]=res[0]+1 #to account for troubles I normally have at the beginning of the array
    #print "res is, ", res
    res=res*delta_slice
    #print "the sum of all the durations is, ", res.sum()

    return [res,gap_distr]


f = open(sys.argv[1])
sliced_interactions = [map(int, string.split(line)) for line in f.readlines()]
f.close()

print ("sliced_interactions is, ", sliced_interactions)

sliced_interactions = s.array(sliced_interactions, dtype="int64")

print ("sliced_interactions is now, ", sliced_interactions)

counter=0
delta_slice=1

contact_duration_and_interval_many_tags(sliced_interactions,\
                                        delta_slice,counter)

From massimodisasha at gmail.com Tue Jun 1 11:46:27 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Tue, 1 Jun 2010 17:46:27 +0200
Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 2
In-Reply-To:
References:
Message-ID: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com>

I'm trying to find an answer on Google, but it seems I'm the only one who
runs into this problem. I have no clue on what I'm missing; I'm available
for any kind of test. Thanks a lot for any help!

Regards,

Massimo

On 01/06/2010, at 17:16, scipy-user-request at scipy.org wrote:

> I installed numpy / scipy from svn
>
> i used :
>
> python setup.py build
> sudo python setup.py install
>
> i'm using a python 2.6.5 installed from source in
> /usr/local/gislib/unix/bin/python
>
> i had the same problem some months ago
> (using the system python that comes with osx)
> someone helped me on the scipy irc channel
> but i forget what we did to fix the "skipped test" problem.
>
> thanks,
>
> Massimo.
>
> On 01/06/2010, at 09:19, David Cournapeau wrote:
>
>> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
>> wrote:
>>> Hi All,
>>> i'm on OS X 10.6.3
>>> python 2.6.5
>>> numpy, scipy (svn versions)
>>>
>>> trying to run the numpy test suite I get:
>>>
>>>> import numpy
>>> [461526 refs]
>>>> numpy.test('1','10')
>>> Running unit tests for numpy
>>> NumPy version 2.0.0.dev8448
>>> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
>>> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
>>> nose version 0.11.3
>>> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
>>> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped

From vincent at vincentdavis.net Tue Jun 1 11:57:26 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Tue, 1 Jun 2010 09:57:26 -0600
Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 2
In-Reply-To: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com>
References: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com>
Message-ID:

I think many of the tests are skipped because they are not relevant
to OS X. It seems the last time I installed I got the same. I am on a
phone, so it is difficult for me to read your test results, but I think
you might find that they are looking for items that are not on OS X.
Just a thought.

Vincent

On Tuesday, June 1, 2010, Massimo Di Stefano wrote:
> I'm trying to find an answer on Google, but it seems I'm the only one who runs into this problem.
> I have no clue on what I'm missing; I'm available for any kind of test. Thanks a lot for any help!
> Regards,
> Massimo
>
> On 01/06/2010, at 17:16, scipy-user-request at scipy.org wrote:
> I installed numpy / scipy from svn
>
> i used :
>
> python setup.py build
> sudo python setup.py install
>
> i'm using a python 2.6.5 installed from source in
> /usr/local/gislib/unix/bin/python
>
> i had the same problem some months ago
> (using the system python that comes with osx)
> someone helped me on the scipy irc channel
> but i forget what we did to fix the "skipped test" problem.
>
> thanks,
>
> Massimo.
>
> On 01/06/2010, at 09:19, David Cournapeau wrote:
>
> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
> wrote:
> Hi All,
> i'm on OS X 10.6.3
> python 2.6.5
> numpy, scipy (svn versions)
>
> trying to run the numpy test suite I get:
>
> import numpy
> [461526 refs]
> numpy.test('1','10')
> Running unit tests for numpy
> NumPy version 2.0.0.dev8448
> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
> nose version 0.11.3
> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped

From massimodisasha at gmail.com Tue Jun 1 12:26:23 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Tue, 1 Jun 2010 18:26:23 +0200
Subject: [SciPy-User] numpy , scipy : test skipped on osx
In-Reply-To:
References: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com>
Message-ID:

Apologies, my previous mail was incomplete: it didn't show the full test log.

See my previous post:

http://mail.scipy.org/pipermail/scipy-user/2010-June/025517.html

As you can see, ALL the tests are skipped :-/

On 01/06/2010, at 17:57, Vincent Davis wrote:

> I think many of the tests are skipped because they are not relevant
> to OS X. It seems the last time I installed I got the same. I am on a
> phone so it is difficult for me to read your test results but I think
> you might find that they are looking for items that are not on OS X.
> Just a thought
>
> Vincent
>
> On Tuesday, June 1, 2010, Massimo Di Stefano wrote:
>> I'm trying to find an answer on Google, but it seems I'm the only one who runs into this problem.
>> I have no clue on what I'm missing; I'm available for any kind of test. Thanks a lot for any help!
>> Regards, >> Massimo >> >> >> Il giorno 01/giu/2010, alle ore 17.16, scipy-user-request at scipy.org ha scritto: >> I installed numpy / scipy from svn >> >> i used : >> >> python setup.py build >> sudo python setup.py install >> >> i'm using a python 2.6.5 installed from source in >> /usr/local/gislib/unix/bin/python >> >> i had the same problem some month ago >> (using the system python that comes with osx) >> someone helped me on scipy irc channel >> but i forghet what we did to fix the "skipped test" problem. >> >> >> thnks, >> >> Massimo. >> >> Il giorno 01/giu/2010, alle ore 09.19, David Cournapeau ha scritto: >> >> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano >> wrote: >> Hi All, >> i'm on OS X 10.6.3 >> python 2.6.5 >> numpy, scipy (svn versions) >> >> tring to run he numpy/scipy i have : >> >> import numpy >> [461526 refs] >> numpy.test('1','10') >> Running unit tests for numpy >> NumPy version 2.0.0.dev8448 >> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy >> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)] >> nose version 0.11.3 >> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] >> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] >> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From jsseabold at gmail.com Tue Jun 1 12:37:53 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 1 Jun 2010 12:37:53 -0400 Subject: [SciPy-User] numpy , scipy : test skipped on osx In-Reply-To: References: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com> Message-ID: On Tue, Jun 1, 2010 at 12:26 PM, Massimo Di Stefano wrote: > Apologize my previouse mail was incomplete, it don't show the full test log > > see my previouse post : > > http://mail.scipy.org/pipermail/scipy-user/2010-June/025517.html > > as you can see All the test are skipped :-/ > What happens if you do numpy.test('1','10', extra_argv=['--exe']) Skipper From bsouthey at gmail.com Tue Jun 1 12:42:08 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 01 Jun 2010 11:42:08 -0500 Subject: [SciPy-User] numpy , scipy : test skipped on osx In-Reply-To: References: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com> Message-ID: <4C053860.9050108@gmail.com> On 06/01/2010 11:26 AM, Massimo Di Stefano wrote: > Apologize my previouse mail was incomplete, it don't show the full test log > > see my previouse post : > > http://mail.scipy.org/pipermail/scipy-user/2010-June/025517.html > > as you can see All the test are skipped :-/ > > > Il giorno 01/giu/2010, alle ore 17.57, Vincent Davis ha scritto: > > >> I think many of the tested are skipped becuase they are not relevant >> to osx. It seems the last time I installed I got the same. I am on a >> phone so it is difficult for me to read you test results but I think >> you might find that that are looking for items that are not on osx. >> Just a thought >> >> Vincent >> >> >> On Tuesday, June 1, 2010, Massimo Di Stefano wrote: >> >>> I'm tring to find answer on googlebut seems i'm the only that runs in this problem. >>> i've no clue on what i'm missing,i'm avaiable for any kind of testthanks a lot for any help! 
>> Regards,
>> Massimo
>>
>> On 01/06/2010, at 17:16, scipy-user-request at scipy.org wrote:
>> I installed numpy / scipy from svn
>>
>> i used :
>>
>> python setup.py build
>> sudo python setup.py install
>>
>> i'm using a python 2.6.5 installed from source in
>> /usr/local/gislib/unix/bin/python
>>
>> i had the same problem some months ago
>> (using the system python that comes with osx)
>> someone helped me on the scipy irc channel
>> but i forget what we did to fix the "skipped test" problem.
>>
>> thanks,
>>
>> Massimo.
>>
>> On 01/06/2010, at 09:19, David Cournapeau wrote:
>>
>> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
>> wrote:
>> Hi All,
>> i'm on OS X 10.6.3
>> python 2.6.5
>> numpy, scipy (svn versions)
>>
>> trying to run the numpy test suite I get:
>>
>> import numpy
>> [461526 refs]
>> numpy.test('1','10')
>> Running unit tests for numpy
>> NumPy version 2.0.0.dev8448
>> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
>> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
>> nose version 0.11.3
>> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
>> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped
>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

Why do you use "numpy.test('1','10')"?
"numpy.test()" should be sufficient for a basic test. Otherwise read the
docstring, as some arguments are meant to be integer, list or Boolean:
http://docs.scipy.org/numpy/docs/numpy.testing.nosetester.NoseTester.test/
such as "numpy.test(verbose=10)"

Bruce

From massimodisasha at gmail.com Tue Jun 1 16:22:40 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Tue, 1 Jun 2010 22:22:40 +0200
Subject: [SciPy-User] numpy , scipy : test skipped on osx
In-Reply-To: <4C053860.9050108@gmail.com>
References: <1CC1A403-5324-458F-8620-B0AE779303EE@gmail.com> <4C053860.9050108@gmail.com>
Message-ID:

That's what I tried. While:

numpy.test()            (skips all the tests, without logs) [1]
numpy.test(verbose=10)  (skips all the tests, showing the log) [2]

I find that:

numpy.test('1','10', extra_argv=['--exe'])

runs the tests as expected, but I get a Python segfault (it probably
depends on something wrong in my system configuration, but I have no
clue on how to debug it):

..
..
test_optional_none (test_array_from_pyobj.test_USHORT_gen) ... ok
test_in_out (test_array_from_pyobj.test_intent) ... ok
test_callback.TestF77Callback.test_all ... Fatal Python error: /var/folders/G7/G7KYb9O2GaGW2zFTZZP9nE+++TI/-Tmp-/tmpefY15u/src.macosx-10.5-intel-2.6/_test_ext_module_5403module.c:206 object at 0x102ea8a00 has negative ref count -2604246222170760230
Abort trap

See the test log:

http://www.geofemengineering.it/data/numpy_test_osx.txt

and the OS X crash log:

http://www.geofemengineering.it/data/numpy_svn_test_osx_crashlog.txt

Thanks to all for any help!

Massimo.
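The "is executable; skipped" lines above are the key: by default nose's
selector refuses to collect test files that carry the executable bit, which
is why extra_argv=['--exe'] makes them run. An alternative, if one does not
want to pass --exe every time, is to clear the executable bit on the
installed tree. A rough sketch, with the path taken from the logs above;
run at your own risk:

import os, stat

top = '/usr/local/gislib/unix/lib/python2.6/site-packages/numpy'  # path from the logs
for dirpath, dirnames, filenames in os.walk(top):
    for fn in filenames:
        if fn.endswith('.py'):
            path = os.path.join(dirpath, fn)
            mode = os.stat(path).st_mode
            # drop user/group/other execute bits so nose will collect the file
            os.chmod(path, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))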
[1] MacBook-Pro-15-di-Massimo-Di-Stefano:bld sasha$ python Python 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy [113311 refs] >>> numpy.test() Running unit tests for numpy NumPy version 2.0.0.dev8448 NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)] nose version 0.11.3 ---------------------------------------------------------------------- Ran 0 tests in 0.129s OK [299676 refs] [2] >>> numpy.test(verbose=10) Running unit tests for numpy NumPy version 2.0.0.dev8448 NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)] nose version 0.11.3 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/DEV_README.txt is executable; skipped ... ..... nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/testing/tests/test_utils.py is executable; skipped nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/tests/test_ctypeslib.py is executable; skipped nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/tests/test_ctypeslib.py is executable; skipped ---------------------------------------------------------------------- Ran 0 tests in 0.174s OK [304709 refs] >>> Il giorno 01/giu/2010, alle ore 18.42, Bruce Southey ha scritto: > On 06/01/2010 11:26 AM, Massimo Di Stefano wrote: >> Apologize my previouse mail was incomplete, it don't show the full test log >> >> see my previouse post : >> >> http://mail.scipy.org/pipermail/scipy-user/2010-June/025517.html >> >> as you can see All the test are skipped :-/ >> >> >> Il giorno 01/giu/2010, alle ore 17.57, Vincent Davis ha scritto: >> >> >>> I think many of the tested are skipped becuase they are not relevant >>> to osx. It seems the last time I installed I got the same. I am on a >>> phone so it is difficult for me to read you test results but I think >>> you might find that that are looking for items that are not on osx. >>> Just a thought >>> >>> Vincent >>> >>> >>> On Tuesday, June 1, 2010, Massimo Di Stefano wrote: >>> >>>> I'm tring to find answer on googlebut seems i'm the only that runs in this problem. >>>> i've no clue on what i'm missing,i'm avaiable for any kind of testthanks a lot for any help! >>>> Regards, >>>> Massimo >>>> >>>> >>>> Il giorno 01/giu/2010, alle ore 17.16, scipy-user-request at scipy.org ha scritto: >>>> I installed numpy / scipy from svn >>>> >>>> i used : >>>> >>>> python setup.py build >>>> sudo python setup.py install >>>> >>>> i'm using a python 2.6.5 installed from source in >>>> /usr/local/gislib/unix/bin/python >>>> >>>> i had the same problem some month ago >>>> (using the system python that comes with osx) >>>> someone helped me on scipy irc channel >>>> but i forghet what we did to fix the "skipped test" problem. 
>>>>
>>>>
>>>> thanks,
>>>>
>>>> Massimo.
>>>>
>>>> On 01/06/2010, at 09:19, David Cournapeau wrote:
>>>>
>>>> On Tue, Jun 1, 2010 at 4:07 PM, Massimo Di Stefano
>>>> wrote:
>>>> Hi All,
>>>> i'm on OS X 10.6.3
>>>> python 2.6.5
>>>> numpy, scipy (svn versions)
>>>>
>>>> trying to run the numpy test suite I get:
>>>>
>>>> import numpy
>>>> [461526 refs]
>>>> numpy.test('1','10')
>>>> Running unit tests for numpy
>>>> NumPy version 2.0.0.dev8448
>>>> NumPy is installed in /usr/local/gislib/unix/lib/python2.6/site-packages/numpy
>>>> Python version 2.6.5 (r265:79063, May 31 2010, 17:10:00) [GCC 4.2.1 (Apple Inc. build 5659)]
>>>> nose version 0.11.3
>>>> nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
>>>> nose.selector: INFO: /usr/local/gislib/unix/lib/python2.6/site-packages/numpy/COMPATIBILITY is executable; skipped
>>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> SciPy-User at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
> Why do you use "numpy.test('1','10')"?
> "numpy.test()" should be sufficient for a basic test. Otherwise read the
> docstring, as some arguments are meant to be integer, list or Boolean:
> http://docs.scipy.org/numpy/docs/numpy.testing.nosetester.NoseTester.test/
> such as "numpy.test(verbose=10)"
>
> Bruce
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From bsouthey at gmail.com Tue Jun 1 17:51:35 2010
From: bsouthey at gmail.com (Bruce Southey)
Date: Tue, 01 Jun 2010 16:51:35 -0500
Subject: [SciPy-User] Again on Calculating (Conditional) Time Intervals
In-Reply-To: <1275405395.2088.26.camel@rattlesnake>
References: <1275405395.2088.26.camel@rattlesnake>
Message-ID: <4C0580E7.3090207@gmail.com>

I really don't understand what you are trying to do, but here goes...

On 06/01/2010 10:16 AM, Lorenzo Isella wrote:
> Dear All,
> I hope this is not too off-topic. I have dug up an old email I posted,
> which went unanswered quite some time ago.
> I made some progress on a simpler problem than the one for which I
> initially asked for help, and I am attaching my own scripts at the end
> of the email. If anyone can help me to progress a bit further, I will
> be very grateful.
> Consider an array of this kind:
>
> 1 12 45
> 2 7 12
> 2 15 37
> 3 25 89
> 3 8 13
> 3 13 44
> 4 77 89
> 4 77 89
> 5 12 22
> 8 12 22
> 9 15 22
> 11 22 37
> 23 3 12
> 24 18 37
> 25 1 12
>
> where the first column is time measured in some units. The other two
> columns are some ID's identifying infected individuals establishing a
> contact at the corresponding time. As you can see, there may be
> time gaps in my recorded times and there may be repeated times if
> several contacts take place simultaneously.

How do you know that 'several contacts take place simultaneously', or is
this due to another variable or rounding? For example, should you just
drop the duplicated line '4 77 89'?

I would be tempted to store this in a 2-d matrix, say contact, where rows
and columns are the IDs. The duplicated occurrences suggest using either
another axis or a tuple.
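A minimal sketch of that bookkeeping, using a dictionary keyed by the
(sorted) ID pair, as discussed next; the variable names and the few sample
rows below are mine, taken from the data in the original post:

from collections import defaultdict
import numpy as np

data = np.array([[ 5, 12, 22],
                 [ 8, 12, 22],
                 [25,  1, 12]])        # a few rows of the sample data above

contact = defaultdict(list)            # (id_low, id_high) -> list of times
for t, a, b in data:
    contact[(min(a, b), max(a, b))].append(t)

# contact[(12, 22)] == [5, 8]; contact[(1, 12)] == [25]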
You could store a tuple (or even a numpy 1-d array) using a dictionary for
a sparse matrix (e.g. http://openbookproject.net//thinkCSpy/ch12.html).

> The ID's are always sorted
> so that the ID number in the 2nd column is always smaller than
> the corresponding entry in the third column (I am obviously indexing
> everything from 1).
> Now, this is my problem: I want to look at a specific ID I will call A
> (let us say A is 12) and calculate all the time differences t_AC-t_AB
> for B!=C, i.e. all the time intervals between the most recent contact
> between A and B and the first subsequent contact between A and C
> (which has to be different from B).
> An example to fix the ideas: A=12, B=22, C=1, then
> t_AB=8 (pick the most recent one before t_AC)
>
Is this '5' or '8'?

> t_AC=25,
>
> hence t_AC-t_AB=25-8=17. (But let me say it again: I want to be able to
> calculate all such intervals for any B and C on the fly.)
>
Except for what happens next, this could be contact[(1, 12)] - contact[(12, 22)].

> It should be clear at this point that the calculated t_AC-t_AB !=
> t_AB-t_AC, as some time ordering is implicit in the definition (in
> t_AC-t_AB, AC contacts always have to be more recent than AB contacts).
> Even in the case of multiple disjoint AB and AC contacts, I always
> have to look for the closest time intervals in time. E.g. if I had
>
> 10 12 22
> 40 12 22
> 60 1 12
> 100 1 12
> 110 12 22
> 130 12 22
> 150 1 12
>
> then I would work out the time intervals 60-40=20 and 150-130=20.
>
How do you define these two intervals without knowing the values in
advance? What does 'closest' really mean?

So, given the above constraint that AC < AB, '60-40' can come from:

AC contacts is [10, 40, 110, 130] (say from contact[(12, 22)])
AB contacts is [60, 100, 150] (say from contact[(1, 12)])

min(AB contacts) minus max(AC contacts that are smaller than min(AB contacts)):

min(AB contacts) = 60
max(AC contacts that are smaller than min(AB contacts)) = max([10, 40]) = 40

(How you do that last one is easier with a numpy array, because it is easy
to find the elements smaller than the AB contacts.)

Otherwise you have to do some 'smart' way to find the minimum difference
of all pairs that is greater than zero:

60-10
60-40
60-110
60-130
100-10
100-40
...

> Now, thanks to the help I got from the list, I am able to calculate the
> distribution of contact and interval durations between IDs (something
> simpler than the conditional time interval). See the code at the end of
> the email, which you can run on the first dataset I provide in this
> email to get the contact/interval distributions.
> Sorry for the long email, but any suggestion about how to calculate the
> conditional probability efficiently would help me a great deal.
> Many thanks
>
> Lorenzo
>
Bruce

> #!/usr/bin/env python
> import scipy as s
> import pylab as p
> import numpy as n
> import sys
> import string
>
>
> def single_tag_contact_times(sliced_data, tag_id):
>
>     #I can follow a given tag by selecting his ID number and looking
>     #for it through the data
>
>     sel=s.where(sliced_data==tag_id)
>
>     #now I need to add a condition in case the id of the tag I have chosen is non-existing
>
>     if (len(sel[0])==0):
>         #print "the chosen tag does not exist"
>         return
>
>     tag_contact_times=sliced_data[sel[0],0] #I select the times at which
>     #the tag I am tracking undergoes a contact.
> > > > tag_no_rep=n.unique1d(tag_contact_times) #The idea is the following: > #in a given time interval delta_slice, a tag may undergo multiple contacts > #with different tags. This corresponds to different entries in the output > #of time_binned_interaction. That function does not allow for multiple contacts between > # the SAME two tags being reported in the same time slice, but it allows the same tag ID > #to appear twice in the same time window if it gets in touch with TWO different tags > #within the same delta_slice. It is fundamental to know that in a given time slice > #tag A has estabilished contact with tag B and tag C (if I discard any bit of this info, > #then I lose info about the state of the network at that time slice), but when it comes to > #simply having the time-dependent distribution of contact durations and intervals between > #any two contacts estabilished by packet A, I will simply say that tag A reported a contact > #in that given time-slice. More sophisticated statistics (e.g. the number of contacts > #estabilished by tag A in a given time slice), can be implemented if found useful/needed > #later on. > > > > #p.save("single_tag_contact_times_no_rep.dat",tag_no_rep,fmt='%d') > > return tag_no_rep > > > def contact_duration_and_interval_many_tags(sliced_interactions,\ > delta_slice, counter): > > #I added this line since now there is no guarantee that in the edge list > # (contact list) tag_A---tag_B, the id of tag_A is<= id of tag_B. > > sliced_interactions[:,1:3]=s.sort(sliced_interactions[:,1:3]) > > #This function iterates interval_between_contacts_single_tag on a all the tag ID`s > #thus outputting the distribution of time intervals between any two contacts in the system. > > tag_ids= n.unique1d(s.ravel(sliced_interactions[:,1:3])) #to get a list of > #all tag ID`s, which appear (repeated) on two rows of the matrix output by > # time_binned_interaction > > > #n.savetxt("tag_IDs.dat", tag_ids , fmt='%d') > > > # tag_ids=tag_ids.astype('int') > > > > #print "tag_ids is, ", tag_ids > > overall_gaps=s.zeros(0) #this array will contain the time intervals between two consecutive > #contacts for all the tags in the system. > > > > overall_duration=s.zeros(0) #this array will contain the time duration of the > #contacts for all the tags in the system. > > > > for i in xrange(len(tag_ids)): > track_tag_id=tag_ids[i] #i.e. 
iterate on all tags > > contact_times=single_tag_contact_times(sliced_interactions, track_tag_id) #get > #an array with all the interactions of a given tag > > #print "contact_times is, ", contact_times > > results=contact_duration_and_interval_single_tag(contact_times, delta_slice) > > tag_duration=results[0] > > > tag_intervals=results[1] #get > #an array with the time intervals between two contacts for a given tag > > > #print "tag_intervals is, ", tag_intervals > > overall_gaps=s.hstack((overall_gaps,tag_intervals)) #collect > #the results on all tags > > > #print "overall_gaps is, ", overall_gaps > > overall_duration=s.hstack((overall_duration,tag_duration)) > > #overall_gaps=overall_gaps[s.where(overall_gaps !=0)] > #overall_duration=overall_duration[s.where(overall_duration !=0)] > filename="many_tags_contact_interval_distr2_%01d"%(counter+1) > filename=filename+"_.dat" > > n.savetxt(filename, overall_gaps , fmt='%d') > > filename="many_tags_contact_duration_distr2_%01d"%(counter+1) > filename=filename+"_.dat" > > > n.savetxt(filename, overall_duration , fmt='%d') > > return overall_duration, overall_gaps > > > def contact_duration_and_interval_single_tag(single_tag_no_rep, delta_slice): > > #the following if condition is useful only when I am really tracking a particular > #tag whose ID is given a priori but which may not exist at all (in the sense that > #it would not establish any contact) in the time window during which I am studying > #the system. > > > if (single_tag_no_rep is None): > print "The chosen tag does not exist hence no analysis can be performed on it" > return > > > > # delta_slice=int(delta_slice) #I do not need floating point arithmetic > > single_tag_no_rep=(single_tag_no_rep-single_tag_no_rep[0])/delta_slice > gaps=s.diff(single_tag_no_rep) #a bit more efficient than the line above > > #print "gaps is, ", gaps > > #gaps is now an array of integers. It either has a list of consecutive 1s > # (which means a contact duration of delta_slice times the number of consecutive ones) > # or an entry higher than one which expresses (in units of delta_slice) the time during > #which the tag underwent no contact > > > #p.save("gaps.dat",gaps, fmt='%d') > > # find_gap=s.where(gaps != 1)[0] > > find_gap=s.where(gaps> 1)[0] #a better definition: a tag may establish > #several contacts within the same timeslice. So I may have some zeros in > #gaps due to different simultaneous contacts. a tag is truly disconnected > #from all the others when I see an increment larger than one in the > #rescaled time. > > > gap_distr=(gaps[find_gap]-1)*delta_slice #so, this is really the list of the > #time intervals between two contacts for my tag. After the discussion with Ciro, > #I modified the definition slightly (there is now a -1 in it). > #It probably does not matter much for the calculated distribution. > > #print "gap_distr is, ", gap_distr > #NB: the procedure above does NOT break down if gap_distr is empty > > > #Now I calculate the duration of the contacts of my tag. I changed this bit since > #I had new discussions with Ciro > > #single_tag_no_rep=s.hstack((0,single_tag_no_rep)) > > #print "single_tag_no_rep is, ", single_tag_no_rep, "and its length is, ", len(single_tag_no_rep) > > # e2=s.diff(single_tag_no_rep) > > # #print "e2 is, ", e2 > > # sel=s.where(e2!=1)[0] > # #print "sel is, ", sel > > #sel=s.where(gaps!=1)[0] > > # res=0 #this will contain the results and will be overwritten > > #What is here needs to be tested very carefully!
There may be some bugs > > > sol=s.hstack((0,find_gap,len(gaps))) > #print "sol is, ", sol > > > > res=s.diff(sol) > #print "res initially is, ", res > > > res[0]=res[0]+1 #to account for troubles I normally have at the beginning of the array > > #print "res is, ", res > > > > > res=res*delta_slice > > > #print "the sum of all the durations is, ", res.sum() > > return [res,gap_distr] > > > f = open(sys.argv[1]) > sliced_interactions = [map(int, string.split(line)) for line in f.readlines()] > f.close() > > print ("sliced_interactions is, ", sliced_interactions) > > sliced_interactions = s.array(sliced_interactions, dtype="int64") > > print ("sliced_interactions is now, ", sliced_interactions) > > counter=0 > > delta_slice=1 > > contact_duration_and_interval_many_tags(sliced_interactions,\ > delta_slice,counter) > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbrt.somerville at gmail.com Tue Jun 1 18:55:03 2010 From: rbrt.somerville at gmail.com (robert somerville) Date: Tue, 1 Jun 2010 15:55:03 -0700 Subject: [SciPy-User] python and filter design: calculating optimal "S" transform Message-ID: Hi; this is an airy question. Does anybody have some code or ideas on how to calculate the optimal "S" transform of user-specified order (wanting the coefficients) for a published filter response curve, i.e. f(s) = (b1*S^2 + b2*S) / (a1*S^2 + a2*S + a3) I am parameterizing the response of a linear device (Out = Response*In). I have the measured frequency response for the device (amplitude, phase) for a range of frequencies. I wish to model that measured response via a ratio of polynomials in the s-domain (or Laplace domain), where I define the polynomial orders (for numerator and denominator). Something like the "Yule-Walker" method is what I'm after except, to my knowledge, the Yule-Walker approach is strictly for responses involving a denominator polynomial (i.e. strictly autoregressive) only. I need something to discover both numerator and denominator coefficients. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Jun 1 18:59:33 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 2 Jun 2010 07:59:33 +0900 Subject: [SciPy-User] numpy , scipy : test skipped on osx In-Reply-To: <35A402E8-B457-4435-A36C-9E89FCA3A8C4@gmail.com> References: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com> <35A402E8-B457-4435-A36C-9E89FCA3A8C4@gmail.com> Message-ID: On Tue, Jun 1, 2010 at 4:26 PM, Massimo Di Stefano wrote: > I installed numpy / scipy from svn > > i used : > > python setup.py build > sudo python setup.py install > > i'm using a python 2.6.5 installed from source in >>> /usr/local/gislib/unix/bin/python > > i had the same problem some months ago > (using the system python that comes with osx) > someone helped me on scipy irc channel > but i forget what we did to fix the "skipped test" problem. The problem seems to be the executable bit set on the tests - remove it for the test scripts. Now, I have no idea why they are set executable in the first place.
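(For instance, something along these lines should clear the bits -- a quick, untested sketch; run it with the same python and with write permission on site-packages, and swap numpy for scipy to cover the scipy tests as well:)

import os, stat
import numpy

top = os.path.dirname(numpy.__file__)   # wherever this python's numpy lives
for dirpath, dirnames, filenames in os.walk(top):
    for name in filenames:
        if name.endswith('.py'):
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            # drop the user/group/other executable bits so nose stops skipping the files
            os.chmod(path, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))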
David From jsseabold at gmail.com Tue Jun 1 19:17:09 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 1 Jun 2010 19:17:09 -0400 Subject: [SciPy-User] numpy , scipy : test skipped on osx In-Reply-To: References: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com> <35A402E8-B457-4435-A36C-9E89FCA3A8C4@gmail.com> Message-ID: On Tue, Jun 1, 2010 at 6:59 PM, David Cournapeau wrote: > On Tue, Jun 1, 2010 at 4:26 PM, Massimo Di Stefano > wrote: >> I installed numpy / scipy from svn >> >> i used : >> >> python setup.py build >> sudo python setup.py install >> >> i'm using a python 2.6.5 installed from source in >>>> /usr/local/gislib/unix/bin/python >> >> i had the same problem some months ago >> (using the system python that comes with osx) >> someone helped me on scipy irc channel >> but i forget what we did to fix the "skipped test" problem. > > The problem seems to be the executable bit set on the tests - remove it > for the test scripts. Now, I have no idea why they are set executable > in the first place. > This comes from setuptools according to Robert. http://thread.gmane.org/gmane.comp.python.scientific.devel/11653/focus=11670 I was under the impression that the numpy install process actually changes them to be not executable somewhere along the way, so that it looks like a borked install (?). Skipper From david at silveregg.co.jp Tue Jun 1 20:44:12 2010 From: david at silveregg.co.jp (David) Date: Wed, 02 Jun 2010 09:44:12 +0900 Subject: [SciPy-User] numpy , scipy : test skipped on osx In-Reply-To: References: <0E83E20B-3AC4-499C-ADEA-F81ED598DF4E@gmail.com> <35A402E8-B457-4435-A36C-9E89FCA3A8C4@gmail.com> Message-ID: <4C05A95C.8000700@silveregg.co.jp> On 06/02/2010 08:17 AM, Skipper Seabold wrote: > > This comes from setuptools according to Robert. Yes, that's why I asked how it was installed - installing with easy_install typically messes up the install. > I was under the impression that the numpy install process actually > changes them to be not executable somewhere along the way AFAIK, we do not do that. cheers, David From josef.pktd at gmail.com Tue Jun 1 21:46:29 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 1 Jun 2010 21:46:29 -0400 Subject: [SciPy-User] python and filter design: calculating optimal "S" transform In-Reply-To: References: Message-ID: On Tue, Jun 1, 2010 at 6:55 PM, robert somerville wrote: > Hi; > this is an airy question. > > does anybody have some code or ideas on how to calculate the optimal "S" > transform of user-specified order (wanting the coefficients) for a > published filter response curve, i.e. > > f(s) = (b1*S^2 + b2*S) / (a1*S^2 + a2*S + a3) > > I am parameterizing the response of a linear device (Out = Response*In). I > have the measured frequency response for the device (amplitude, phase) for a > range of frequencies. > > I wish to model that measured response via a ratio of polynomials in the > s-domain (or Laplace domain), where I define the polynomial orders (for > numerator and denominator). > > Something like the "Yule-Walker" method is what I'm after except, to my > knowledge, the Yule-Walker approach is strictly for responses involving a > denominator polynomial (i.e. strictly autoregressive) only. I need > something to discover both numerator and denominator coefficients. > I have quite a bit of difficulty translating the terminology and understanding how "optimal" would be defined in your case.
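One way to make "optimal" concrete -- a rough sketch, not from any package; the arrays w and H for the measured response (angular frequencies and complex values) and the orders nb, na are assumptions -- is the classic Levy linearization: pick the coefficients minimizing |B(jw) - H(jw)*A(jw)|**2, which becomes an ordinary linear least-squares problem once a0 is pinned to 1:

import numpy as np

def fit_rational_s(w, H, nb, na):
    # minimize |B(jw) - H(jw)*A(jw)|^2 over the polynomial coefficients
    s = 1j * np.asarray(w, dtype=float)
    H = np.asarray(H, dtype=complex)
    cols = [s**k for k in range(nb + 1)]              # numerator terms b_k * s^k
    cols += [-H * s**k for k in range(1, na + 1)]     # denominator terms, with a_0 = 1
    M = np.column_stack(cols)
    # split real and imaginary parts so a real-valued lstsq applies
    Mri = np.vstack([M.real, M.imag])
    rhs = np.hstack([H.real, H.imag])                 # the a_0*H term, moved to the right
    coef = np.linalg.lstsq(Mri, rhs)[0]
    b = coef[:nb + 1]                                 # ascending powers of s
    a = np.hstack([1.0, coef[nb + 1:]])               # ascending powers of s
    return b, a

The linearization is known to over-weight high frequencies (the Sanathanan-Koerner iteration reweights it), but it does deliver both numerator and denominator coefficients in one shot.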
I have coded up quite a bit for working with ARMA models, especially for going in between ma and ar representation (infinite denominator or infinite numerator). If I understand the question correctly, then the closest I have is to use numerical optimization to solve for the ARMA(p,q) for fixed p,q that minimizes the squared integrated distance between the theoretical impulse response function of this process and one that is given as target. I don't remember if I ever added a function to convert the ma term to an invertible form, since the optimization is unconstrained. If nobody else has a more signal-theoretic solution, I can look up my code which is somewhere in scikits.statsmodels.sandbox.tsa Josef > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From eneide.odissea at gmail.com Wed Jun 2 05:49:17 2010 From: eneide.odissea at gmail.com (Emanuele Rocci) Date: Wed, 2 Jun 2010 11:49:17 +0200 Subject: [SciPy-User] failing to create a scikit.timeseries object Message-ID: Hi All I am trying to create a scikit.timeseries object starting from 2 datetime objects. If I understood correctly it should be possible to create a scikits.timeseries starting from datetime objects. I try the following code but it fails, saying that the parameters are insufficient. The 2 datetimes differ by a few microseconds. In this case what should be the value for the freq parameter? Is what I am trying allowed? In theory, since timeseries can be based on datetime objects it should be possible to handle up to microseconds, is this correct? I think that this is not really clear to me. Regards Eo import datetime import scikits.timeseries as ts tm1 = datetime.datetime( 2010,1,1, 10,10,2, 123456 ) tm2 = datetime.datetime( 2010,1,1, 10,10,2, 345678 ) d = [ tm1, tm2 ] tseries = ts.time_series( dates=d ) tseries = ts.time_series( d ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rigal at rapideye.de Wed Jun 2 08:05:36 2010 From: rigal at rapideye.de (Matthieu Rigal) Date: Wed, 2 Jun 2010 14:05:36 +0200 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: References: Message-ID: <201006021405.36360.rigal@rapideye.de> Hi folks, This is my first message on the scipy-user list. I post it here because I want to get the most out of numpy and scipy and I hope it is not too far from the initial scope. I couldn't find a solution to it on the Internet. First I have two sets of data. I am doing several leastsq optimizations. For linear y=ax+b, I know how to handle the rest; for second order or more, it is more difficult. In the y = ax²+bx+c case, I now have a curve. I want to calculate or estimate the distance between each point (combination of two data sets) and the curve. Calculation is doable by applying this formula http://mathcentral.uregina.ca/QQ/database/QQ.09.07/s/elliot1.php But then, an estimation of the correct result is required, and it takes a lot of time for every step, if you have 25 million values in each set. First, there might be some functions useful to do this calculation quite fast (up to 5 minutes is quite acceptable), that I may have overlooked. That may basically calculate the following internally http://answers.yahoo.com/question/index?qid=20070109172252AAP34wx Second, it may be better to get a close estimation of the distance to the curve. I was thinking about calculating the tangent of the curve at the (x, y0) point calculated from (x, y).
Here too, I didn't find a direct way of calculating the tangent other than computing points very close to y0 and solving the equation. Are there some direct possibilities? Then I would simply calculate a distance from point to line, using one of the following methods http://www.worsleyschool.net/science/files/linepoint/distance.html And so get an estimation. But also in this case, it seems quite complicated and not very efficient, so you may have a fully different idea :-)) Thanks for your hints, Regards, Matthieu From zachary.pincus at yale.edu Wed Jun 2 09:18:58 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 2 Jun 2010 09:18:58 -0400 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: <201006021405.36360.rigal@rapideye.de> References: <201006021405.36360.rigal@rapideye.de> Message-ID: <2A5DC4D0-5874-4685-BB82-B3C523CC4B28@yale.edu> > Calculation is doable by applying this formula > http://mathcentral.uregina.ca/QQ/database/QQ.09.07/s/elliot1.php > But then, an estimation of the correct result is required, and it > takes a > lot of time for every step, if you have 25 million values in each set. > > First, there might be some functions useful to do this calculation > quite > fast (up to 5 minutes is quite acceptable), that I may have > overlooked. That > may basically calculate the following internally > http://answers.yahoo.com/question/index?qid=20070109172252AAP34wx Are you applying the formula individually for each x,y point (slow), or are you applying the formula in parallel to an array of all the x,y points (potentially fast)? If the former, then you'll want to read up on some numpy tutorials until you see how to implement the formula without looping through each point (or post code here and someone can help); if the latter then maybe you can post code anyway and people can see if there are obvious bottlenecks. In principle it seems like applying the closed-form solution shouldn't be too slow, right? Unless I missed something, there's nothing iterative, right?
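(A minimal sketch of the difference, with made-up coefficients -- the point is that a single array expression evaluates the formula for every point at once, in C, with no Python-level loop:)

import numpy as np

a, b, c = 2.0, -1.0, 0.5                    # stand-ins for the fitted parameters
x = np.linspace(0.0, 1.0, 1000)             # stand-ins for the measured points
y = a*x**2 + b*x + c + 0.01*np.random.standard_normal(x.shape)

r = y - (a*x**2 + b*x + c)                  # residuals for all points in one pass

The closed-form distance formula can be applied the same way: build each of its terms as an array expression over the whole x and y arrays instead of looping point by point.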
Zach From josef.pktd at gmail.com Wed Jun 2 09:29:30 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Jun 2010 09:29:30 -0400 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: <2A5DC4D0-5874-4685-BB82-B3C523CC4B28@yale.edu> References: <201006021405.36360.rigal@rapideye.de> <2A5DC4D0-5874-4685-BB82-B3C523CC4B28@yale.edu> Message-ID: On Wed, Jun 2, 2010 at 9:18 AM, Zachary Pincus wrote: >> Calculation is doable by applying this formula >> http://mathcentral.uregina.ca/QQ/database/QQ.09.07/s/elliot1.php >> But then, an estimation of the correct result is required, and it >> takes a >> lot of time for every step, if you have 25 million values in each set. >> >> First, there might be some functions useful to do this calculation >> quite >> fast (up to 5 minutes is quite acceptable), that I may have >> overlooked. That >> may basically calculate the following internally >> http://answers.yahoo.com/question/index?qid=20070109172252AAP34wx from the example here it looks like the minimum distance is a solution to a degree 3 polynomial for each point, a + b*x + c*x**3 = 0. Is there a vectorized way to find the real root of this? Josef > > Are you applying the formula individually for each x,y point (slow), > or are you applying the formula in parallel to an array of all the x,y > points (potentially fast)? > > If the former, then you'll want to read up on some numpy tutorials > until you see how to implement the formula without looping through > each point (or post code here and someone can help); if the latter > then maybe you can post code anyway and people can see if there are > obvious bottlenecks. > > In principle it seems like applying the closed-form solution shouldn't > be too slow, right? Unless I missed something, there's nothing > iterative, right? > > Zach > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From rbrt.somerville at gmail.com Wed Jun 2 11:28:10 2010 From: rbrt.somerville at gmail.com (robert somerville) Date: Wed, 2 Jun 2010 08:28:10 -0700 Subject: [SciPy-User] python and filter design: calculating optimal "S" transform Message-ID: My electrical guy says this looks very interesting, and he would like to see if it is what we are looking for, although he says it looks very close. we are trying to model the impulse response of some geophysical instrumentation from published response curves. I believe we are trying to determine the best coefficients in the modeling polynomial. Thanks; Robert Somerville -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Jun 2 12:16:50 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Jun 2010 12:16:50 -0400 Subject: [SciPy-User] python and filter design: calculating optimal "S" transform In-Reply-To: References: Message-ID: On Tue, Jun 1, 2010 at 9:46 PM, wrote: > On Tue, Jun 1, 2010 at 6:55 PM, robert somerville > wrote: >> Hi; >> this is an airy question. >> >> does anybody have some code or ideas on how to calculate the optimal "S" >> transform of user-specified order (wanting the coefficients) for a >> published filter response curve, i.e. >> >> f(s) = (b1*S^2 + b2*S) / (a1*S^2 + a2*S + a3) >> >> I am parameterizing the response of a linear device (Out = Response*In). I >> have the measured frequency response for the device (amplitude, phase) for a >> range of frequencies.
>> >> I wish to model that measured response via a ratio of polynomials in the >> s-domain (or Laplace domain), where I define the polynomial orders (for >> numerator and denominator). >> >> Something like the "Yule-Walker" method is what I'm after except, to my >> knowledge, the Yule-Walker approach is strictly for responses involving a >> denominator polynomial (i.e. strictly autoregressive) only. I need >> something to discover both numerator and denominator coefficients. >> > > I have quite a bit of difficulty translating the terminology and > understanding how "optimal" would be defined in your case. > > I have coded up quite a bit for working with ARMA models, especially > for going in between ma and ar representation (infinite denominator or > infinite numerator). If I understand the question correctly, then the > closest I have is to use numerical optimization to solve for the > ARMA(p,q) for fixed p,q that minimizes the squared integrated distance > between the theoretical impulse response function of this process and > one that is given as target. I don't remember if I ever added a > function to convert the ma term to an invertible form, since the > optimization is unconstrained. > > If nobody else has a more signal-theoretic solution, I can look up my code > which is somewhere in scikits.statsmodels.sandbox.tsa > > Josef > On Wed, Jun 2, 2010 at 11:28 AM, robert somerville wrote: > My electrical guy says this looks very interesting, and he would like to see > if it is what we are looking for, although he says it looks very close. > > we are trying to model the impulse response of some geophysical > instrumentation from published response curves. I believe we are trying to > determine the best coefficients in the modeling polynomial. > > Thanks; > Robert Somerville > (Can you try to reply to the thread to keep the information together?) some documentation is here: http://statsmodels.sourceforge.net/sandbox.html#time-series-analysis-tsa for most parts I have both sample and theoretical properties of arma processes for example theoretical impulse response function of ARMA process: http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head%3A/scikits/statsmodels/sandbox/tsa/arima.py#L278 I think the conversion code and the theoretical acf, pac, impulse response functions are fully tested. What I have written in frequency domain is more eclectic and less tested, because I have less experience working in frequency than time domain. I think theoretical spectrum is easy, but I don't remember how far I got with theoretical impulse response function in frequency domain. The closest I have to what you might need is ar2arma http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head%3A/scikits/statsmodels/sandbox/tsa/try_fi.py#L79 def ar2arma(ar_des, p, q, n=20, mse='ar', start=None): '''find arma approximation to ar process This finds the ARMA(p,q) coefficients that minimize the integrated squared difference between the impulse_response functions (MA representation) of the AR and the ARMA process. This does currently not check whether the MA lagpolynomial of the ARMA process is invertible, neither does it check the roots of the AR lagpolynomial. This was written to approximate an infinite order AR process (fractionally integrated ARMA), but I think it should be possible to change the function to match a given impulse response function, instead of the one implied by ar_des.
The function has some examples but is not really tested, because I didn't finish the missing pieces to get the invertible process. Some functions were written to help me with the translation of the terminology between signal analysis, scipy.signal and time series analysis, essentially ma, ar are the same as num, denom (but I never remember which is which). The lag polynomials sometimes have the leading 1, sometimes not. The general form is ar(L) y_t = ma(L) u_t, with u_t input, y_t output, and L the lag operator. In the optimization and estimation, ar(0)=ma(0)=1 is usually assumed. Note: I worked on this heavily more than half a year ago, and have barely looked at it since. Hope that helps and I would appreciate any feedback. Josef > > > > > > >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > From Chris.Barker at noaa.gov Wed Jun 2 12:53:55 2010 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Wed, 02 Jun 2010 09:53:55 -0700 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: <201006021405.36360.rigal@rapideye.de> References: <201006021405.36360.rigal@rapideye.de> Message-ID: <4C068CA3.1040106@noaa.gov> Matthieu Rigal wrote: > In the y = ax²+bx+c case, I now have a curve. I want to calculate or > estimate the distance between each point (combination of two data sets) > and the curve. Usually, one is trying to find the "error", or difference between the y you get from the fitted curve and the actual y you have measured. This is also called the "residual". And if this is your case, it's really easy: r = y1 - (ax²+bx+c) where x is a 1-d array of your x values, and y1 is a 1-d array of the corresponding y values. So I don't know why you need to use: > Calculation is doable by applying this formula > http://mathcentral.uregina.ca/QQ/database/QQ.09.07/s/elliot1.php Though if I have misunderstood, then you can still use that formula, and simply plug in 1-d arrays of values, rather than scalar values, and numpy will do it all in C loops for you. If that still isn't fast enough, then there are tricks to reduce the amount of data copying going on -- post your code here and ask for help. > Second, it may be better to get a close estimation of the distance to the > curve. I'm still wondering if you really need the distance, in 2-d space to the curve, or if you need the difference between your measured value and the one predicted by the fitted curve. If the latter, then it is the distance in the y-direction, not the closest distance, which is much easier (see above). HTH, -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From robert.kern at gmail.com Wed Jun 2 12:59:41 2010 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 2 Jun 2010 12:59:41 -0400 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: <201006021405.36360.rigal@rapideye.de> References: <201006021405.36360.rigal@rapideye.de> Message-ID: On Wed, Jun 2, 2010 at 08:05, Matthieu Rigal wrote: > Hi folks, > > This is my first message on the scipy-user list. I post it here because I > want to get the most out of numpy and scipy and I hope it is not > too far from the initial scope. I couldn't find a solution to it on the > Internet. > > First I have two sets of data.
I am doing several leastsq optimizations. > For linear y=ax+b, I know how to handle the rest; for second order or more, > it is more difficult. > > In the y = ax²+bx+c case, I now have a curve. I want to calculate or > estimate the distance between each point (combination of two data sets) > and the curve. This is precisely the Orthogonal Distance Regression problem solved by scipy.odr. http://docs.scipy.org/doc/scipy/reference/odr.html The ODRPACK User's Guide referenced is available here: http://www.mechanicalkern.com/static/odrpack_guide.pdf -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From R.Springuel at umit.maine.edu Wed Jun 2 14:08:54 2010 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 02 Jun 2010 14:08:54 -0400 Subject: [SciPy-User] Speeding up a search algorithm Message-ID: <4C069E36.40907@umit.maine.edu> I've got a search algorithm I've written which is designed to find the minimum value of a rank 2 ndarray (called search) and remember it (calling it m) and where that value was located (with the values of n1 and n2). While that may seem a simple enough task using min and argmin functions, there are a few caveats that make using those functions impractical (at least they appear impractical to me): 1) The array being searched is an extract from a larger array (made using take twice on the larger array with the rows and columns being taken specified by a list of indices called current). What we want to know is the location of the minimum in the original array, not in the extract, so the values for n1 and n2 have to be taken from current. 2) The array is a distance array. I.e. it is symmetric about the main diagonal and all the main diagonal elements are 0. However, these main diagonal elements should not count as being the minimum value of the array. 3) The minimum value does not necessarily occur only once in the array (even when the symmetry is taken into account) and the way to choose between multiple minima can vary. Which variation is used is specified by the value of aggr (None, True, or False). As it stands, the algorithm looks like this: search = distancematrix.take(current,axis=0).take(current,axis=1) m = numpy.max(search) n1 = current[0] n2 = current[1] if aggr: p1 = 0 else: p1 = N for i in range(len(search)): for j in range(len(search)): if i == j: break else: if search[i][j] < m or numpy.isnan(m): m = search[i][j] n1 = current[i] n2 = current[j] if n1 < 0: p1 = tree.pop[n1] else: p1 = 1 if n2 < 0: p1 += tree.pop[n2] else: p1 += 1 elif search[i][j] == m and aggr != None: if current[i] < 0: p2 = tree.pop[current[i]] else: p2 = 1 if current[j] < 0: p2 += tree.pop[current[j]] else: p2 += 1 if p2 < p1 and not aggr: n1 = current[i] n2 = current[j] p1 = p2 elif p2 > p1 and aggr: n1 = current[i] n2 = current[j] p1 = p2 However, I've found it to be fairly slow, especially on large arrays, when compared to min and argmin (probably due to all the looping in Python). Does anyone have any suggestions for optimizing this function or otherwise speeding it up? -- R.
Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only From edepagne at lcogt.net Wed Jun 2 14:24:22 2010 From: edepagne at lcogt.net (Éric Depagne) Date: Wed, 2 Jun 2010 11:24:22 -0700 Subject: [SciPy-User] Speeding up a search algorithm In-Reply-To: <4C069E36.40907@umit.maine.edu> References: <4C069E36.40907@umit.maine.edu> Message-ID: <201006021124.22298.edepagne@lcogt.net> Le mercredi 2 juin 2010 11:08:54, R. Padraic Springuel a écrit : > I've got a search algorithm I've written which is designed to find the > minimum value of a rank 2 ndarray (called search) and remember it > (calling it m) and where that value was located (with the values of n1 > and n2). While that may seem a simple enough task using min and argmin > functions, there are a few caveats that make using those functions > impractical (at least they appear impractical to me): > > 1) The array being searched is an extract from a larger array (made > using take twice on the larger array with the rows and columns being > taken specified by a list of indices called current). What we want to > know is the location of the minimum in the original array, not in the > extract, so the values for n1 and n2 have to be taken from current. > > 2) The array is a distance array. I.e. it is symmetric about the main > diagonal and all the main diagonal elements are 0. However, these main > diagonal elements should not count as being the minimum value of the array. > > 3) The minimum value does not necessarily occur only once in the array > (even when the symmetry is taken into account) and the way to choose > between multiple minima can vary. Which variation is used is > specified by the value of aggr (None, True, or False). > > As it stands, the algorithm looks like this: > > search = distancematrix.take(current,axis=0).take(current,axis=1) > m = numpy.max(search) > n1 = current[0] > n2 = current[1] > if aggr: > p1 = 0 > else: > p1 = N > for i in range(len(search)): > for j in range(len(search)): > if i == j: > break > else: > if search[i][j] < m or numpy.isnan(m): > m = search[i][j] > n1 = current[i] > n2 = current[j] > if n1 < 0: > p1 = tree.pop[n1] > else: > p1 = 1 > if n2 < 0: > p1 += tree.pop[n2] > else: > p1 += 1 > elif search[i][j] == m and aggr != None: > if current[i] < 0: > p2 = tree.pop[current[i]] > else: > p2 = 1 > if current[j] < 0: > p2 += tree.pop[current[j]] > else: > p2 += 1 > if p2 < p1 and not aggr: > n1 = current[i] > n2 = current[j] > p1 = p2 > elif p2 > p1 and aggr: > n1 = current[i] > n2 = current[j] > p1 = p2 > > > However, I've found it to be fairly slow, especially on large arrays, > when compared to min and argmin (probably due to all the looping in > Python). Does anyone have any suggestions for optimizing this function > or otherwise speeding it up? > You may want to work only on one part of your array. For instance this: [(x,y) for x in range(4) for y in range(x)] will give you [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] that can be used as the indexes for the lower half of your matrix, without the diagonal. Then, once you have selected only this part of the array, you can use array.min(). Éric.
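(To make the indexing step concrete -- a small sketch; numpy.tril_indices, where available, builds the same (row, column) pairs as the comprehension above:)

import numpy as np

a = np.arange(100).reshape(10, 10)
rows, cols = np.tril_indices(10, -1)   # strictly lower triangle, diagonal excluded
lower = a[rows, cols]                  # 1-d array of those elements
m = lower.min()                        # minimum off-diagonal value
k = lower.argmin()
i, j = rows[k], cols[k]                # back to coordinates in the full array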
-- Un clavier azerty en vaut deux ---------------------------------------------------------- Éric Depagne edepagne at lcogt.net Las Cumbres Observatory 6740 Cortona Dr Goleta CA, 93117 ---------------------------------------------------------- From charlesr.harris at gmail.com Wed Jun 2 14:28:43 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 2 Jun 2010 12:28:43 -0600 Subject: [SciPy-User] Speeding up a search algorithm In-Reply-To: <4C069E36.40907@umit.maine.edu> References: <4C069E36.40907@umit.maine.edu> Message-ID: On Wed, Jun 2, 2010 at 12:08 PM, R. Padraic Springuel < R.Springuel at umit.maine.edu> wrote: > I've got a search algorithm I've written which is designed to find the > minimum value of a rank 2 ndarray (called search) and remember it > (calling it m) and where that value was located (with the values of n1 > and n2). While that may seem a simple enough task using min and argmin > functions, there are a few caveats that make using those functions > impractical (at least they appear impractical to me): > > What is the larger picture here? This sounds a bit like you are shooting for some sort of clustering and there may already be an appropriate algorithm for it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ferrell at diablotech.com Wed Jun 2 15:03:18 2010 From: ferrell at diablotech.com (Robert Ferrell) Date: Wed, 2 Jun 2010 13:03:18 -0600 Subject: [SciPy-User] failing to create a scikit.timeseries object In-Reply-To: References: Message-ID: With just datetime data, you probably want to create a DateArray. I think that second resolution is the highest frequency available (at least that's what I see in the documentation). Here's how to make a date array: ts.date_array(freq='S', dlist=d) DateArray([01-Jan-2010 10:10:02, 01-Jan-2010 10:10:02], freq='S') I don't know if timeseries Date objects can resolve microsecond differences. -r On Jun 2, 2010, at 3:49 AM, Emanuele Rocci wrote: > Hi All > > I am trying to create a scikit.timeseries object starting from 2 > datetime objects. > > If I understood correctly it should be possible to create a > scikits.timeseries starting from datetime objects. > > I try the following code but it fails, saying that the parameters are insufficient. > > The 2 datetimes differ by a few microseconds. In this case what > should be the value for the freq parameter? > > Is what I am trying allowed? In theory, since timeseries can be > based on datetime objects it should be possible to handle up to > microseconds, is this correct? > > I think that this is not really clear to me. > > Regards Eo > > import datetime > import scikits.timeseries as ts > > tm1 = datetime.datetime( 2010,1,1, 10,10,2, 123456 ) > tm2 = datetime.datetime( 2010,1,1, 10,10,2, 345678 ) > d = [ tm1, tm2 ] > tseries = ts.time_series( dates=d ) > tseries = ts.time_series( d ) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pgmdevlist at gmail.com Wed Jun 2 15:50:26 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 2 Jun 2010 15:50:26 -0400 Subject: [SciPy-User] failing to create a scikit.timeseries object In-Reply-To: References: Message-ID: <86ACE1D1-6B7E-4649-97EF-1A9E7DB534DD@gmail.com> On Jun 2, 2010, at 3:03 PM, Robert Ferrell wrote: > I think that second resolution is the highest frequency available (at least that's what I see in the documentation). Yes. > Here's how to make a date array: > > ts.date_array(freq='S', dlist=d) > DateArray([01-Jan-2010 10:10:02, 01-Jan-2010 10:10:02], > freq='S') > > I don't know if timeseries Date objects can resolve microsecond differences. It shouldn't. Internally, dates are stored as integers. The difference between one frequency and another is the reference used to convert integers back to some dates (eg., with an "ANN" frequency, we just store the year; with a "MON" freq, we just store the nb of months since 01/01/01...). Because so far the highest frequency is "SEC", there's no way to distinguish times at a larger frequency. From R.Springuel at umit.maine.edu Wed Jun 2 17:24:19 2010 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Wed, 02 Jun 2010 17:24:19 -0400 Subject: [SciPy-User] Speeding up a search algorithm Message-ID: <4C06CC03.50208@umit.maine.edu> Eric wrote: > You may want to work only on one part of your array. > > For instance this : > [(x,y) for x in range(4) for y in range(x)] > > will give you > [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] > > that can be used as the indexes for the lower half of your matrix, without the > diagonal. > > Then, once you have selected only this part of the array, you can use > array.min(). I'm not sure I understand what you're proposing. I tried these possible variations: >>> a = arange(100).reshape(10,10) >>> a[(x,y) for x in range(4) for y in range(x)] File "", line 1 a[(x,y) for x in range(4) for y in range(x)] ^ SyntaxError: invalid syntax >>> b = [(x,y) for x in range(4) for y in range(x)] >>> a[b] Traceback (most recent call last): File "", line 1, in ValueError: too many indices for array >>> a.take(b) array([[1, 0], [2, 0], [2, 1], [3, 0], [3, 1], [3, 2]]) None of which are returning the appropriate part of the array (two don't even return anything). Chuck wrote: > What is the larger picture here? This sounds like a bit like you are > shooting for some > sort of clustering and there may already be an appropriate algorithm for it. Yes, this is part of a clustering algorithm. The problem is that algorithms already in scipy don't support everything that I need to do here and I don't know anything about c (and so can't figure out how to modify them to actually do what I want them to do). I had the same problem with PyCluster back before scipy had clustering algorithms incorporated into it (except for kmeans) and wrote my own package that did everything that I wanted it to do, though it is written in pure Python. This algorithm comes from that package of mine. I'm trying to speed it up because it can take 24 hours or so to complete the clustering on the ~3500 point data sets I'm working with now. -- R. 
Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only From david_baddeley at yahoo.com.au Wed Jun 2 18:44:35 2010 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 2 Jun 2010 15:44:35 -0700 (PDT) Subject: [SciPy-User] Speeding up a search algorithm In-Reply-To: <4C069E36.40907@umit.maine.edu> References: <4C069E36.40907@umit.maine.edu> Message-ID: <167856.49827.qm@web33002.mail.mud.yahoo.com> How about the following strategy: #set up arrays of indices X, Y = numpy.mgrid[:distancematrix.shape[0], :distancematrix.shape[1]] #pull out the relevant chunk (numpy.ix_ turns the 1-d index list into a 2-d cross index) idx = numpy.ix_(current, current) search = distancematrix[idx] x = X[idx] y = Y[idx] #make a copy of the array & make the diagonal elements large (so they're not found) search = search + 1e20*numpy.eye(search.shape[0]) #find the minimum value m = search.min() #find all minima mask = search == m #get the coordinates (in the original distancematrix) xs = x[mask] ys = y[mask] #decide which minimum to take (sorry I didn't understand your logic here) .... hope this helps, cheers, David ----- Original Message ---- From: R. Padraic Springuel To: Scipy User Support Sent: Thu, 3 June, 2010 6:08:54 AM Subject: [SciPy-User] Speeding up a search algorithm I've got a search algorithm I've written which is designed to find the minimum value of a rank 2 ndarray (called search) and remember it (calling it m) and where that value was located (with the values of n1 and n2). While that may seem a simple enough task using min and argmin functions, there are a few caveats that make using those functions impractical (at least they appear impractical to me): 1) The array being searched is an extract from a larger array (made using take twice on the larger array with the rows and columns being taken specified by a list of indices called current). What we want to know is the location of the minimum in the original array, not in the extract, so the values for n1 and n2 have to be taken from current. 2) The array is a distance array. I.e. it is symmetric about the main diagonal and all the main diagonal elements are 0. However, these main diagonal elements should not count as being the minimum value of the array. 3) The minimum value does not necessarily occur only once in the array (even when the symmetry is taken into account) and the way to choose between multiple minima can vary. Which variation is used is specified by the value of aggr (None, True, or False). As it stands, the algorithm looks like this: search = distancematrix.take(current,axis=0).take(current,axis=1) m = numpy.max(search) n1 = current[0] n2 = current[1] if aggr: p1 = 0 else: p1 = N for i in range(len(search)): for j in range(len(search)): if i == j: break else: if search[i][j] < m or numpy.isnan(m): m = search[i][j] n1 = current[i] n2 = current[j] if n1 < 0: p1 = tree.pop[n1] else: p1 = 1 if n2 < 0: p1 += tree.pop[n2] else: p1 += 1 elif search[i][j] == m and aggr != None: if current[i] < 0: p2 = tree.pop[current[i]] else: p2 = 1 if current[j] < 0: p2 += tree.pop[current[j]] else: p2 += 1 if p2 < p1 and not aggr: n1 = current[i] n2 = current[j] p1 = p2 elif p2 > p1 and aggr: n1 = current[i] n2 = current[j] p1 = p2 However, I've found it to be fairly slow, especially on large arrays, when compared to min and argmin (probably due to all the looping in Python). Does anyone have any suggestions for optimizing this function or otherwise speeding it up? -- R.
Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From stephens.js at gmail.com Wed Jun 2 23:00:15 2010 From: stephens.js at gmail.com (Scott Stephens) Date: Wed, 2 Jun 2010 22:00:15 -0500 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 Message-ID: I'm attempting to build/install scipy from source on Mac OS X 10.6 (on intel hardware) and am getting failures on imports. I've compiled python 2.6.4 as a framework; I've built both it and numpy as x86_64-only applications, and am trying to build scipy the same way (in other words, I'm not trying to do a multi-architecture universal build). I ran the numpy test suite and got one known fail and one skipped test. I built scipy like this: FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined dynamic_lookup" python setup.py build python setup.py install I also tried the build without overriding the compile and link flags, but that leads to producing libraries that are universal 32-bit ppc/x86, rather than the desired 64 bit x86_64. When I do import scipy.fftpack, I get: Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/__init__.py", line 10, in from basic import * File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/basic.py", line 13, in import _fftpack as fftpack ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: can't map Running scipy.test() generates 19 test failures, most of which are similar to the above. The obvious checks for architecture and dependencies doesn't show anything wrong: ----- file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: Mach-O 64-bit executable x86_64 ----- otool -L /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: /usr/local/lib/libgfortran.2.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1) ----- General system info: os.name: 'posix' sys.platform: 'darwin' sys.version: '2.6.4 (r264:75706, Mar 27 2010, 11:45:57) \n[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)]' numpy.version.version: '1.3.0' gcc --version: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) gfortran --version: GNU Fortran (GCC) 4.2.3 uname -a: Darwin indy.local 10.3.0 Darwin Kernel Version 10.3.0: Fri Feb 26 11:58:09 PST 2010; root:xnu-1504.3.12~1/RELEASE_I386 i386 Any ideas? I'm pretty stumped. 
Thanks, Scott From cournape at gmail.com Thu Jun 3 02:58:43 2010 From: cournape at gmail.com (David Cournapeau) Date: Thu, 3 Jun 2010 15:58:43 +0900 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 12:00 PM, Scott Stephens wrote: > I'm attempting to build/install scipy from source on Mac OS X 10.6 (on > intel hardware) and am getting failures on imports. ?I've compiled > python 2.6.4 as a framework; I've built both it and numpy as > x86_64-only applications, and am trying to build scipy the same way > (in other words, I'm not trying to do a multi-architecture universal > build). ?I ran the numpy test suite and got one known fail and one > skipped test. > > I built scipy like this: > FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined > dynamic_lookup" python setup.py build > python setup.py install This comes up often, see here: http://ask.scipy.org/en/topic/34-error-building-scipy-on-mac-os-x:-importerror:-dlopen-no-suitable-image-found#reply-95 cheers, David From paul.anton.letnes at gmail.com Thu Jun 3 03:10:16 2010 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Thu, 3 Jun 2010 09:10:16 +0200 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On 3. juni 2010, at 05.00, Scott Stephens wrote: > I'm attempting to build/install scipy from source on Mac OS X 10.6 (on > intel hardware) and am getting failures on imports. I've compiled > python 2.6.4 as a framework; I've built both it and numpy as > x86_64-only applications, and am trying to build scipy the same way > (in other words, I'm not trying to do a multi-architecture universal > build). I ran the numpy test suite and got one known fail and one > skipped test. > > I built scipy like this: > FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined > dynamic_lookup" python setup.py build > python setup.py install > > I also tried the build without overriding the compile and link flags, > but that leads to producing libraries that are universal 32-bit > ppc/x86, rather than the desired 64 bit x86_64. > > When I do import scipy.fftpack, I get: > Traceback (most recent call last): > File "", line 1, in > File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/__init__.py", > line 10, in > from basic import * > File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/basic.py", > line 13, in > import _fftpack as fftpack > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, > 2): no suitable image found. Did find: > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: > can't map > > Running scipy.test() generates 19 test failures, most of which are > similar to the above. 
The obvious checks for architecture and > dependencies doesn't show anything wrong: > > ----- > file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: > Mach-O 64-bit executable x86_64 > ----- > otool -L /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so > /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: > /usr/local/lib/libgfortran.2.dylib (compatibility version 3.0.0, > current version 3.0.0) > /usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current > version 1.0.0) > /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current > version 125.0.1) > ----- > > General system info: > os.name: 'posix' > sys.platform: 'darwin' > sys.version: '2.6.4 (r264:75706, Mar 27 2010, 11:45:57) \n[GCC 4.2.1 > (Apple Inc. build 5646) (dot 1)]' > numpy.version.version: '1.3.0' > gcc --version: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659) > gfortran --version: GNU Fortran (GCC) 4.2.3 > uname -a: Darwin indy.local 10.3.0 Darwin Kernel Version 10.3.0: Fri > Feb 26 11:58:09 PST 2010; root:xnu-1504.3.12~1/RELEASE_I386 i386 > > Any ideas? I'm pretty stumped. > > Thanks, > > Scott > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Have you considered MacPorts? http://www.macports.org/ After installing macports, run: sudo port install py26-scipy and scipy will be compiled from source. Good luck, Paul. From eneide.odissea at gmail.com Thu Jun 3 03:36:08 2010 From: eneide.odissea at gmail.com (eneide.odissea) Date: Thu, 3 Jun 2010 09:36:08 +0200 Subject: [SciPy-User] failing to create a scikit.timeseries object In-Reply-To: <86ACE1D1-6B7E-4649-97EF-1A9E7DB534DD@gmail.com> References: <86ACE1D1-6B7E-4649-97EF-1A9E7DB534DD@gmail.com> Message-ID: I Thank you all and I apologize for my very bad code snippet. Do you know whether in scikits.timeseries there is a command / option / configuration that allows to store time using long instead of integer? Probably it might be necessary also to setup a callback somewhere able to convert the datetime into this internally stored number ; have you any idea about it? On Wed, Jun 2, 2010 at 9:50 PM, Pierre GM wrote: > On Jun 2, 2010, at 3:03 PM, Robert Ferrell wrote: > > > I think that second resolution is the highest frequency available (at > least that's what I see in the documentation). > > Yes. > > > > Here's how to make a date array: > > > > ts.date_array(freq='S', dlist=d) > > DateArray([01-Jan-2010 10:10:02, 01-Jan-2010 10:10:02], > > freq='S') > > > > I don't know if timeseries Date objects can resolve microsecond > differences. > > It shouldn't. Internally, dates are stored as integers. The difference > between one frequency and another is the reference used to convert integers > back to some dates (eg., with an "ANN" frequency, we just store the year; > with a "MON" freq, we just store the nb of months since 01/01/01...). > Because so far the highest frequency is "SEC", there's no way to distinguish > times at a larger frequency. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rigal at rapideye.de Thu Jun 3 06:06:48 2010 From: rigal at rapideye.de (Matthieu Rigal) Date: Thu, 3 Jun 2010 12:06:48 +0200 Subject: [SciPy-User] point-curve distance estimation or calculation In-Reply-To: References: Message-ID: <201006031206.49086.rigal@rapideye.de> Hi Eat, Jose, Robert, Zachary, Josef and Christopher, Thanks a lot for all your messages, I needed a bit of time to ingest them all... First, here are some clarifications of the questions I got, since my message was not really clear: - I work with (x,y) coordinates - I am looking for a leastsq fitting for y = ax² + bx + c - I want to have the distance from each point to the curve (in this case, the y-distance, which is fast and already implemented, is OK when the curve is soft, but quite different from the real distance (ODR-like) when the curve is strong) The ODR package, which I didn't find/see at the beginning, does what I want to do, but I have two problems with it: - It does not seem to handle masked arrays... For ax+b, I send a compressed masked array to get the leastsq parameters fit, and afterwards I calculate the delta on the whole masked N-d array back... Here, the ODR is doing the leastsq fitting inside (or I misunderstood what function to give as input) - It needs a really long processing time. Maybe, in relation to the comment above, it is somehow possible to already give the function fit and to get only the delta as a result (and not all the parameters generated by the run), to save a bit of processing time. - My x and y are 8 bits on one hand and 32 bits on the other hand; this may slow down the process for the ODR calculation... I'll let Robert in particular answer these points, but this is why I was thinking about estimating the distance via calculating the tangent at this point. As Josef mentioned, it would only have an acceptable processing time if I could use a vectorized way to find the tangent or to solve the degree 3 polynomial. But I do not know what this could look like... On Wednesday 02 June 2010 22:37:43 you wrote: > Hi Matthieu, > > I'm sending this message first off-list because I'd like to know a few > more details. > > > > > First I have two sets of data. > > I'm assuming that you are talking about (x, y) co-ordinates here. Right? > > > I am doing several leastsq optimizations. for linear y=ax+b, > > Linear in what sense? Surely f(x)= ax+ b is linear _in the parameters_ a > and b, and it represents a 'straight line', but f(x) is _not_ linear in > a sense that for all x, a, b is true: f(x)+ f(x)= 2f(x)! > > > I know how to handle the rest, for second order or more, it is more > > difficult. > But it doesn't need to be at all that more difficult! > > First I have to ask why you are doing several leastsq optimizations? > (What follows I'll assume that you actually did it, because you needed > to 'fit' some 'polylines' to your (x, y) data and now you encounter > problems when trying to 'fit higher degree polycurves' to the data?). > > > In the y = ax²+bx+c case, I now have a curve. > > Yes indeed, and the parameters (a, b, c) would be estimated 'as easily' > as with your "y=ax+b" case!!! (Because the parameters (a, b, c) are > still linear with respect to f(x)= ax²+bx+c, and could still be estimated > with the leastsq!!!) > > > I want to calculate or estimate the distance between each point > > (combination of two data sets) and the curve. > After this I won't quote your text anymore, because it gets quite > convolved.
On Wednesday 02 June 2010 22:37:43 you wrote:
> Hi Matthieu,
>
> I'm sending this message first off-list because I'd like to know a few
> more details.
>
> > First I have two sets of data.
>
> I'm assuming that you are talking about (x, y) co-ordinates here. Right?
>
> > I am doing several leastsq optimizations. for linear y=ax+b,
>
> Linear in what sense? Surely f(x)= ax+ b is linear _in the parameters_ a
> and b, and it represents a 'straight line', but f(x) is _not_ linear in
> the sense that for all x, a, b it holds that: f(x)+ f(x)= 2f(x)!
>
> > I know how to handle the rest, for second order or more, it is more
> > difficult.
> But it doesn't need to be that much more difficult at all!
>
> First I have to ask why you are doing several leastsq optimizations?
> (In what follows I'll assume that you actually did it because you needed
> to 'fit' some 'polylines' to your (x, y) data and now you encounter
> problems when trying to 'fit higher degree polycurves' to the data?)
>
> > In the y = ax²+bx+c case, I now have a curve.
>
> Yes indeed, and the parameters (a, b, c) would be estimated 'as easily'
> as in your "y=ax+b" case!!! (Because the parameters (a, b, c) are
> still linear with respect to f(x)= ax²+bx+c, and could still be estimated
> with leastsq!!!)
>
> > I want to calculate or estimate the distance between each point
> > (combination of two data sets) and the curve.
> After this I won't quote your text anymore, because it gets quite
> convolved. However I'd just like to ask your opinion whether it would be
> more suitable (as R. Kern already suggested on the list, the orthogonal
> distance regression a.k.a. total least squares method) to consider your
> problem as a function of both x and y, i.e. your curve(s) would be fitted
> as a function like c= f(x, y)?
>
> If yes, then there are fast methods available (of course limited by your
> particular hardware)!
>
> Please feel free to explain your specific needs in more detail ;-)
>
> Regards,
> eat

--
Matthieu Rigal
Product Development

RapidEye AG
Tel: +49-(0)3381-89 04 331
Molkenmarkt 30
Fax: +49-(0)3381-89 04 101
14776 Brandenburg/Havel
Germany
http://www.rapideye.de

From robince at gmail.com Thu Jun 3 06:31:05 2010
From: robince at gmail.com (Robin)
Date: Thu, 3 Jun 2010 11:31:05 +0100
Subject: [SciPy-User] using multiple processors for particle filtering
In-Reply-To: <8763292fi4.fsf@lanl.gov>
References: <8739xgndes.fsf@lanl.gov> <8763292fi4.fsf@lanl.gov>
Message-ID:

On Thu, May 27, 2010 at 10:37 PM, Andy Fraser wrote:
>
> #Multiprocessing version:
>
>        noise = numpy.random.standard_normal((N_particles,noise_df))
>        jobs = zip(self.particles,noise)
>        self.particles = self.pool.map(func, jobs, self.chunk_size)
>        return (m,v)

What platform are you on? I often forget that multiprocessing works quite
differently on Windows than on unix platforms (and is much less useful
there). On unix platforms the child processes are spawned with fork(),
which means they share all the memory state of the parent process, with
copy-on-write if they make changes. On Windows separate processes are
spawned and all the state has to be passed through the serialiser (I
think).

So on unix you can share large quantities of (read-only) data very cheaply
by making it accessible before the fork. So if you are on Mac/Linux and
the slowdown is caused by passing the large noise array, you could get
around this by making it a global somehow before the fork when you
initiate the pool... i.e.

import mymodule
mymodule.noise = numpy.random.standard_normal((N_particles,noise_df))

then use this in func; don't pass the noise array in the map call.
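A minimal (untested) sketch of what I mean -- mymodule here is a stand-in
for whatever module actually holds func:

import multiprocessing
import numpy
import mymodule                   # hypothetical module defining func()

N_particles, noise_df = 10000, 4          # example sizes
# assign the global *before* the pool is created: each forked worker
# then inherits the array via copy-on-write, so nothing gets pickled
mymodule.noise = numpy.random.standard_normal((N_particles, noise_df))
pool = multiprocessing.Pool()
# only the small per-particle state goes through the serialiser now;
# func(i) can look up its noise row as mymodule.noise[i]
particles = pool.map(mymodule.func, range(N_particles))

The important bit is that the assignment happens before Pool() is
created.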
But I agree with Zachary about using arrays of object parameters rather
than lists of objects each with their own parameter variables.

Cheers

Robin

From stephens.js at gmail.com Thu Jun 3 07:25:19 2010
From: stephens.js at gmail.com (Scott Stephens)
Date: Thu, 3 Jun 2010 06:25:19 -0500
Subject: [SciPy-User] Building Scipy for Mac OS X 10.6
In-Reply-To: References:
Message-ID:

On Thu, Jun 3, 2010 at 1:58 AM, David Cournapeau wrote:
> On Thu, Jun 3, 2010 at 12:00 PM, Scott Stephens wrote:
>> I built scipy like this:
>> FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined
>> dynamic_lookup" python setup.py build
>> python setup.py install
>
> This comes up often, see here:
> http://ask.scipy.org/en/topic/34-error-building-scipy-on-mac-os-x:-importerror:-dlopen-no-suitable-image-found#reply-95
>

I actually knew that the flags were overridden, I included "-fPIC" in
FFLAGS and "-undefined dynamic_lookup" in LDFLAGS because I saw they were
in the default build. I didn't know that problems with that were related
to the import problem, so thank you for making that connection for me.
Does anyone have any suggestions about how exactly my flags are wrong, or
some method to figure out what's wrong? I've included snippets of the
build logs from the original build and from my build in case they may be
useful.

-----
From default build:
-----
/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -Wall -undefined
dynamic_lookup -bundle
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o
-Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o
build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so
-----
From my build:
-----
/usr/local/bin/gfortran -Wall -Wall -arch x86_64 -undefined dynamic_lookup
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o
-L/usr/local/lib/gcc/i686-apple-darwin8/4.2.3/x86_64
-Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o
build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so
-----
From default build:
-----
building 'dfftpack' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form
-fno-second-underscore -arch ppc -arch i686 -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall
-fno-second-underscore -arch ppc -arch i686 -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form
-fno-second-underscore -Wall -fno-second-underscore -arch ppc -arch i686
-fPIC -O3 -funroll-loops
-----
From my build:
-----
building 'dfftpack' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form
-fno-second-underscore -arch x86_64 -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall
-fno-second-underscore -arch x86_64 -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form
-fno-second-underscore -Wall -fno-second-underscore -arch x86_64 -fPIC
-O3 -funroll-loops
-----
From default build:
-----
building 'scipy.fftpack._fftpack' extension
compiling C sources
C compiler: gcc -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g
-fwrapv -O3 -Wall -Wstrict-prototypes

creating build/temp.macosx-10.6-i386-2.6/build
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack
compile options: '-Ibuild/src.macosx-10.6-i386-2.6
-I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include
-I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c'
gcc: scipy/fftpack/src/zfft.c
gcc: scipy/fftpack/src/drfft.c
gcc: scipy/fftpack/src/zrfft.c
gcc: scipy/fftpack/src/zfftnd.c
gcc: build/src.macosx-10.6-i386-2.6/fortranobject.c
gcc: build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.c
/usr/local/bin/gfortran -Wall -arch ppc -arch i686 -Wall -undefined
dynamic_lookup -bundle
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o
-Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o
build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so
-----
From my build:
-----
building 'scipy.fftpack._fftpack' extension
compiling C sources
C compiler: gcc -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g
-fwrapv -O3 -Wall -Wstrict-prototypes

creating build/temp.macosx-10.6-i386-2.6/build
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy
creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack
compile options: '-Ibuild/src.macosx-10.6-i386-2.6
-I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include
-I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c'
gcc: scipy/fftpack/src/zfft.c
gcc: scipy/fftpack/src/drfft.c
gcc: scipy/fftpack/src/zrfft.c
gcc: scipy/fftpack/src/zfftnd.c
gcc: build/src.macosx-10.6-i386-2.6/fortranobject.c
gcc: build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.c
/usr/local/bin/gfortran -Wall -Wall -arch x86_64 -undefined dynamic_lookup
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o
build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o
-L/usr/local/lib/gcc/i686-apple-darwin8/4.2.3/x86_64
-Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o
build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so

It looks to me like all of the flags are the same except for the
architecture-related ones. Ideas anyone?
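In case it helps anyone reproduce this, here is the throwaway script I
use to spot extensions that came out with the wrong architecture -- just
a sketch, with the path hard-coded to my install location:

import os
import subprocess

root = ("/Library/Frameworks/Python.framework/Versions/2.6/"
        "lib/python2.6/site-packages/scipy")
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.endswith(".so"):
            path = os.path.join(dirpath, name)
            out = subprocess.Popen(["file", path],
                                   stdout=subprocess.PIPE).communicate()[0]
            if "x86_64" not in out:
                # flag any extension that is missing a 64-bit slice
                print out.strip()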
Thanks,

Scott

From jgomezdans at gmail.com Thu Jun 3 10:20:35 2010
From: jgomezdans at gmail.com (Jose Gomez-Dans)
Date: Thu, 3 Jun 2010 15:20:35 +0100
Subject: [SciPy-User] Ranking a list of numbers
Message-ID:

Hi,
I've done this with loops, but I am sure there is a much nicer way of
doing it with a couple of index tricks. Let's say I've got an array of
numbers. I want to convert it into an array where each element is the
rank of that element in the starting array (by rank I mean its position
when sorted in decreasing order).

For example, if my original array is
[ 0.012, 0.08, 2, 0.5, 0.010, 0.03]
my output array ought to look like this (starting at 1, rather than 0)
[ 5, 3, 1, 2, 6, 4 ]
meaning "the first element of the array is the 5th largest, the second is
the 3rd largest, the third is the largest" and so on.

The ordering can be done with argsort()[::-1] (to get the decreasing
order), but to get the final array, I can only think of clumsy ways of
doing loops.

Any ideas?
Thanks!
J

From afraser at lanl.gov Thu Jun 3 10:30:34 2010
From: afraser at lanl.gov (Andy Fraser)
Date: Thu, 03 Jun 2010 08:30:34 -0600
Subject: [SciPy-User] using multiple processors for particle filtering
In-Reply-To: (Robin's message of "Thu, 3 Jun 2010 11:31:05 +0100")
References: <8739xgndes.fsf@lanl.gov> <8763292fi4.fsf@lanl.gov>
Message-ID: <871vcoxk6t.fsf@lanl.gov>

Thank you for your continuing help.

>>>>> "R" == Robin writes:

R> On Thu, May 27, 2010 at 10:37 PM, Andy Fraser wrote:
>>
>> #Multiprocessing version:
>>
>> noise = numpy.random.standard_normal((N_particles,noise_df))
>> jobs = zip(self.particles,noise)
>> self.particles = self.pool.map(func, jobs, self.chunk_size)
>> return (m,v)

R> What platform are you on?

[...]

Ubuntu/GNU/Linux

R> So if you are on Mac/Linux and the slowdown is caused by
R> passing the large noise array, [...]

I believe that large image arrays were being copied and maybe pickled.

R> But I agree with Zachary about using arrays of object
R> parameters rather than lists of objects each with their own
R> parameter variables.

Following Zach's advice (and my own experience), I've moved all of the
loops over particles from python to C++ or implemented them as single
numpy functions. That has cut the time by a factor of about 25. My next
moves are to figure out where the remaining time gets spent and if there
are big expenditures in the C++ code, I will look into multiprocessing
there.

--
Andy Fraser ISR-2 (MS:B244) afraser at lanl.gov
Los Alamos National Laboratory 505 665 9448
Los Alamos, NM 87545

From kwgoodman at gmail.com Thu Jun 3 10:32:21 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Thu, 3 Jun 2010 07:32:21 -0700
Subject: [SciPy-User] Ranking a list of numbers
In-Reply-To: References:
Message-ID:

On Thu, Jun 3, 2010 at 7:20 AM, Jose Gomez-Dans wrote:
> Hi,
> I've done this with loops, but I am sure there is a much nicer way of doing
> it with a couple of index tricks.
> Let's say I've got an array of numbers.
> I want to convert it into an array
> where each element is the rank of that element in the starting array (by
> rank I mean its position when sorted in decreasing order)
> For example, if my original array is
> [ 0.012, 0.08, 2, 0.5, 0.010, 0.03]
> my output array ought to look like this (starting at 1, rather than 0)
> [ 5, 3, 1, 2, 6, 4 ]
> meaning "the first element of the array is the 5th largest, the second is
> the 3rd largest, the third is the largest" and so on.
> The ordering can be done with argsort()[::-1] (to get the decreasing order),
> but to get the final array, I can only think of clumsy ways of doing loops.
> Any ideas?
> Thanks!
> J

If you don't want to split ties nor handle NaNs:

>> (-a).argsort().argsort() + 1
   array([5, 3, 1, 2, 6, 4])

To handle ties you can use:

from scipy.stats import rankdata

To handle ties and NaNs, you can use the labeled array package, la:

>> import la
>> lar = la.larry([0.012, 0.08, 2, 0.5, 0.010, 0.03])
>> (-lar).ranking(norm='0,N-1').A + 1
   array([ 5.,  3.,  1.,  2.,  6.,  4.])

From kwgoodman at gmail.com Thu Jun 3 10:36:10 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Thu, 3 Jun 2010 07:36:10 -0700
Subject: [SciPy-User] Ranking a list of numbers
In-Reply-To: References:
Message-ID:

On Thu, Jun 3, 2010 at 7:32 AM, Keith Goodman wrote:
> On Thu, Jun 3, 2010 at 7:20 AM, Jose Gomez-Dans wrote:
>> Hi,
>> I've done this with loops, but I am sure there is a much nicer way of doing
>> it with a couple of index tricks.
>> Let's say I've got an array of numbers. I want to convert it into an array
>> where each element is the rank of that element in the starting array (by
>> rank I mean its position when sorted in decreasing order)
>> For example, if my original array is
>> [ 0.012, 0.08, 2, 0.5, 0.010, 0.03]
>> my output array ought to look like this (starting at 1, rather than 0)
>> [ 5, 3, 1, 2, 6, 4 ]
>> meaning "the first element of the array is the 5th largest, the second is
>> the 3rd largest, the third is the largest" and so on.
>> The ordering can be done with argsort()[::-1] (to get the decreasing order),
>> but to get the final array, I can only think of clumsy ways of doing loops.
>> Any ideas?
>> Thanks!
>> J
>
> If you don't want to split ties nor handle NaNs:
>
>>> (-a).argsort().argsort() + 1
>    array([5, 3, 1, 2, 6, 4])
>
> To handle ties you can use:
>
> from scipy.stats import rankdata
>
> To handle ties and NaNs, you can use the labeled array package, la:
>
>>> import la
>>> lar = la.larry([0.012, 0.08, 2, 0.5, 0.010, 0.03])
>>> (-lar).ranking(norm='0,N-1').A + 1
>    array([ 5.,  3.,  1.,  2.,  6.,  4.])

Oh, actually la has a pure array version:

>> a = np.array([0.012, 0.08, 2, 0.5, 0.010, 0.03])
>> from la.afunc import ranking
>> ranking(-a, norm='0,N-1') + 1
   array([ 5.,  3.,  1.,  2.,  6.,  4.])

From R.Springuel at umit.maine.edu Thu Jun 3 13:04:42 2010
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Thu, 03 Jun 2010 13:04:42 -0400
Subject: [SciPy-User] Speeding up a search algorithm
In-Reply-To: References:
Message-ID: <4C07E0AA.9070104@umit.maine.edu>

> #make a copy of the array & make the diagonal elements large (so they're not found)
> search = search.copy() + 1e20*numpy.eye(*search.shape)

I've tried this solution before and came across other problems.
Unfortunately, I don't remember what those problems were (my version
history mentions that I had them and that I fixed them by changing to my
current algorithm, but not what the exact problem was). I'll play around
with the idea, however.
At the very least I should be able to figure out what the problem was. -- R. Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only From mdekauwe at gmail.com Thu Jun 3 13:07:05 2010 From: mdekauwe at gmail.com (mdekauwe) Date: Thu, 3 Jun 2010 10:07:05 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Ranking a list of numbers In-Reply-To: References: Message-ID: <28770319.post@talk.nabble.com> Hi, Did you mean import numpy as np a = np.array([ 0.012, 0.08, 2, 0.5, 0.010, 0.03]) index = np.argsort(a)[::-1] b = a[index] print b [ 2. 0.5 0.08 0.03 0.012 0.01 ] Mart Jose Gomez-Dans wrote: > > Hi, > I've done this with loops, but I am sure there is a much nicer way of > doing > it with a couple of index tricks. > Let's say I've got an array of numbers. I want to convert it into an array > where each element is the rank of that element in the starting array (by > rank I mean its position when sorted in decreasing order) > > For example, if my original array is > [ 0.012, 0.08, 2, 0.5, 0.010, 0.03] > my output array ought to look like this (starting at 1, rather than 0) > [ 5, 3, 1, 2, 6, 4 ] > meaning "the first element of the array is the 5th largest, the second is > the 3rd largest, the third is the largest" and so on. > > The ordering can be done with argsort[::-1] (to get the decreasing order), > but to get the final array, I can only think of clumsy ways of doing > loops. > > Any ideas? > Thanks! > J > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Ranking-a-list-of-numbers-tp28768184p28770319.html Sent from the Scipy-User mailing list archive at Nabble.com. From vincent at vincentdavis.net Thu Jun 3 15:33:05 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Thu, 3 Jun 2010 13:33:05 -0600 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? Message-ID: Mostly I question "Warning Please do not to use this editor yet -- things are still too much in motion." http://docs.scipy.org/scipyorg/Front%20Page/ From d.l.goldsmith at gmail.com Thu Jun 3 16:24:28 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 3 Jun 2010 13:24:28 -0700 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: I don't know, but that's not the Front Page of the SciPy Documentation editor, which is: http://docs.scipy.org/scipy/Front%20Page/ (though you can hardly be blamed for mistaking the two: they use the same template, and their url's differ by only an org - can that be changed?) and, though paltry, at least it is current. DG On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis wrote: > Mostly I question "Warning Please do not to use this editor yet -- > things are still too much in motion." > http://docs.scipy.org/scipyorg/Front%20Page/ > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david_baddeley at yahoo.com.au Thu Jun 3 16:55:01 2010 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Thu, 3 Jun 2010 13:55:01 -0700 (PDT) Subject: [SciPy-User] using multiple processors for particle filtering In-Reply-To: <871vcoxk6t.fsf@lanl.gov> References: <8739xgndes.fsf@lanl.gov> <8763292fi4.fsf@lanl.gov> <871vcoxk6t.fsf@lanl.gov> Message-ID: <10111.33068.qm@web33006.mail.mud.yahoo.com> If you end up with most of your time spent in c code, you might be able to release the GIL and then use multiple threads, in which case you won't need to worry about process spawning overhead & shared memory. my 2 cents, David ----- Original Message ---- From: Andy Fraser To: SciPy Users List Sent: Fri, 4 June, 2010 2:30:34 AM Subject: Re: [SciPy-User] using multiple processors for particle filtering Thank you for your continuing help. >>>>> "R" == Robin writes: R> On Thu, May 27, 2010 at 10:37 PM, Andy Fraser wrote: >> > #Multiprocessing version: >> >> noise = >> numpy.random.standard_normal((N_particles,noise_df)) >> jobs = zip(self.particles,noise) self.particles = >> self.pool.map(func, jobs, self.chunk_size) return (m,v) R> What platform are you on? [...] Ubuntu/GNU/Linux R> So if you are on Mac/Linux and the slow down is caused by R> passing the large noise array, [...] I believe that large image arrays were being copied and maybe pickled. R> But I agree with Zachary about using arrays of object R> parameters rather than lists of objects each with their own R> parameter variables. Following Zach's advice (and my own experience), I've moved all of the loops over particles from python to C++ or implemented them as single numpy functions. That has cut the time by a factor of about 25. My next moves are to figure out where the remaining time gets spent and if there are big expenditures in the C++ code, I will look into multiprocessing there. -- Andy Fraser ISR-2 (MS:B244) afraser at lanl.gov Los Alamos National Laboratory 505 665 9448 Los Alamos, NM 87545 _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From vincent at vincentdavis.net Thu Jun 3 18:04:26 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Thu, 3 Jun 2010 16:04:26 -0600 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith wrote: > I don't know, but that's not the Front Page of the SciPy Documentation > editor, which is: > > http://docs.scipy.org/scipy/Front%20Page/ What is even more confusing is that if you click the link at the top right of the above page that says "scipy.org editor" you get the other page. Vincent > > (though you can hardly be blamed for mistaking the two: they use the same > template, and their url's differ by only an org - can that be changed?) and, > though paltry, at least it is current. > > DG > > On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis > wrote: >> >> Mostly I question "Warning Please do not to use this editor yet -- >> things are still too much in motion." >> http://docs.scipy.org/scipyorg/Front%20Page/ >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Mathematician: noun, someone who disavows certainty when their uncertainty > set is non-empty, even if that set has measure zero. 
> > Hope: noun, that delusive spirit which escaped Pandora's jar and, with her > lies, prevents mankind from committing a general suicide. ?(As interpreted > by Robert Graves) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Thu Jun 3 18:07:48 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 3 Jun 2010 18:07:48 -0400 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis wrote: > On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith wrote: >> I don't know, but that's not the Front Page of the SciPy Documentation >> editor, which is: >> >> http://docs.scipy.org/scipy/Front%20Page/ > > What is even more confusing is that if you click the link at the top > right of the above page that says "scipy.org editor" you get the other > page. I think this is where the new version of the scipy.org website is created, and I guess it's only partially finished and connected. Josef > Vincent > >> >> (though you can hardly be blamed for mistaking the two: they use the same >> template, and their url's differ by only an org - can that be changed?) and, >> though paltry, at least it is current. >> >> DG >> >> On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis >> wrote: >>> >>> Mostly I question "Warning Please do not to use this editor yet -- >>> things are still too much in motion." >>> http://docs.scipy.org/scipyorg/Front%20Page/ >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> Mathematician: noun, someone who disavows certainty when their uncertainty >> set is non-empty, even if that set has measure zero. >> >> Hope: noun, that delusive spirit which escaped Pandora's jar and, with her >> lies, prevents mankind from committing a general suicide. ?(As interpreted >> by Robert Graves) >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jsseabold at gmail.com Thu Jun 3 18:07:51 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 3 Jun 2010 18:07:51 -0400 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis wrote: > On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith > wrote: > > I don't know, but that's not the Front Page of the SciPy Documentation > > editor, which is: > > > > http://docs.scipy.org/scipy/Front%20Page/ > > What is even more confusing is that if you click the link at the top > right of the above page that says "scipy.org editor" you get the other > page. > > I believe the scipy.org editor is for the redesign of scipy.org itself while the docs editor is for the docs. Skipper -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at vincentdavis.net Thu Jun 3 18:21:09 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Thu, 3 Jun 2010 16:21:09 -0600 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? 
In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 4:07 PM, wrote: > On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis wrote: >> On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith wrote: >>> I don't know, but that's not the Front Page of the SciPy Documentation >>> editor, which is: >>> >>> http://docs.scipy.org/scipy/Front%20Page/ >> >> What is even more confusing is that if you click the link at the top >> right of the above page that says "scipy.org editor" you get the other >> page. > > I think this is where the new version of the scipy.org website is > created, and I guess it's only partially finished and connected. > > Josef I meant to post this on the Dev list. And just to be clear. form this page http://docs.scipy.org/doc/ clicking on "Write, review and proof the documentation" http://docs.scipy.org/numpy/ and then at the top right link "scipy.org editor" http://docs.scipy.org/scipyorg/ which forwards to http://docs.scipy.org/scipyorg/Front%20Page/ Which is not http://docs.scipy.org/scipy/Front%20Page/ Vincent > > >> Vincent >> >>> >>> (though you can hardly be blamed for mistaking the two: they use the same >>> template, and their url's differ by only an org - can that be changed?) and, >>> though paltry, at least it is current. >>> >>> DG >>> >>> On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis >>> wrote: >>>> >>>> Mostly I question "Warning Please do not to use this editor yet -- >>>> things are still too much in motion." >>>> http://docs.scipy.org/scipyorg/Front%20Page/ >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> -- >>> Mathematician: noun, someone who disavows certainty when their uncertainty >>> set is non-empty, even if that set has measure zero. >>> >>> Hope: noun, that delusive spirit which escaped Pandora's jar and, with her >>> lies, prevents mankind from committing a general suicide. ?(As interpreted >>> by Robert Graves) >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From d.l.goldsmith at gmail.com Thu Jun 3 18:30:25 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 3 Jun 2010 15:30:25 -0700 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 3:21 PM, Vincent Davis wrote: > On Thu, Jun 3, 2010 at 4:07 PM, wrote: > > On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis > wrote: > >> On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith < > d.l.goldsmith at gmail.com> wrote: > >>> I don't know, but that's not the Front Page of the SciPy Documentation > >>> editor, which is: > >>> > >>> http://docs.scipy.org/scipy/Front%20Page/ > >> > >> What is even more confusing is that if you click the link at the top > >> right of the above page that says "scipy.org editor" you get the other > >> page. > > > > I think this is where the new version of the scipy.org website is > > created, and I guess it's only partially finished and connected. > > > > Josef > > I meant to post this on the Dev list. > > And just to be clear. 
> form this page > http://docs.scipy.org/doc/ > clicking on > "Write, review and proof the documentation" http://docs.scipy.org/numpy/ > and then at the top right link > "scipy.org editor" http://docs.scipy.org/scipyorg/ which forwards to > http://docs.scipy.org/scipyorg/Front%20Page/ > Which is not > http://docs.scipy.org/scipy/Front%20Page/ > Correct, from http://docs.scipy.org/numpy/ you want the link just to the left of "scipy.org editor," namely Scipy documentation editor - very confusing - can someone suggest a solution? DG > > Vincent > > > > > > >> Vincent > >> > >>> > >>> (though you can hardly be blamed for mistaking the two: they use the > same > >>> template, and their url's differ by only an org - can that be changed?) > and, > >>> though paltry, at least it is current. > >>> > >>> DG > >>> > >>> On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis < > vincent at vincentdavis.net> > >>> wrote: > >>>> > >>>> Mostly I question "Warning Please do not to use this editor yet -- > >>>> things are still too much in motion." > >>>> http://docs.scipy.org/scipyorg/Front%20Page/ > >>>> _______________________________________________ > >>>> SciPy-User mailing list > >>>> SciPy-User at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >>> > >>> -- > >>> Mathematician: noun, someone who disavows certainty when their > uncertainty > >>> set is non-empty, even if that set has measure zero. > >>> > >>> Hope: noun, that delusive spirit which escaped Pandora's jar and, with > her > >>> lies, prevents mankind from committing a general suicide. (As > interpreted > >>> by Robert Graves) > >>> > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jun 3 18:38:00 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 3 Jun 2010 18:38:00 -0400 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 6:30 PM, David Goldsmith wrote: > On Thu, Jun 3, 2010 at 3:21 PM, Vincent Davis > wrote: >> >> On Thu, Jun 3, 2010 at 4:07 PM, ? wrote: >> > On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis >> > wrote: >> >> On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith >> >> wrote: >> >>> I don't know, but that's not the Front Page of the SciPy Documentation >> >>> editor, which is: >> >>> >> >>> http://docs.scipy.org/scipy/Front%20Page/ >> >> >> >> What is even more confusing is that if you click the link at the top >> >> right of the above page that says "scipy.org editor" you get the other >> >> page. 
>> > >> > I think this is where the new version of the scipy.org website is >> > created, and I guess it's only partially finished and connected. >> > >> > Josef >> >> I meant to post this on the Dev list. >> >> And just to be clear. >> form this page >> http://docs.scipy.org/doc/ >> clicking on >> "Write, review and proof the documentation" http://docs.scipy.org/numpy/ >> and then at the top right link >> "scipy.org editor" http://docs.scipy.org/scipyorg/ ?which forwards to >> http://docs.scipy.org/scipyorg/Front%20Page/ >> Which is not >> http://docs.scipy.org/scipy/Front%20Page/ > > Correct, from http://docs.scipy.org/numpy/ you want the link just to the > left of "scipy.org editor," namely?Scipy documentation editor - very > confusing - can someone suggest a solution? I think the description in http://docs.scipy.org/scipyorg/Front%20Page/ is relatively clear "scipy.org editor This is an editor for a tentative new version of the scipy.org site, which currently runs on Moinmoin. We are experimenting on moving the most important parts to Sphinx." maybe point to the other two editors in the Warning, or tell contributors that there are 3 editors now. Josef > DG > >> >> >> Vincent >> >> > >> > >> >> Vincent >> >> >> >>> >> >>> (though you can hardly be blamed for mistaking the two: they use the >> >>> same >> >>> template, and their url's differ by only an org - can that be >> >>> changed?) and, >> >>> though paltry, at least it is current. >> >>> >> >>> DG >> >>> >> >>> On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis >> >>> >> >>> wrote: >> >>>> >> >>>> Mostly I question "Warning Please do not to use this editor yet -- >> >>>> things are still too much in motion." >> >>>> http://docs.scipy.org/scipyorg/Front%20Page/ >> >>>> _______________________________________________ >> >>>> SciPy-User mailing list >> >>>> SciPy-User at scipy.org >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>> >> >>> >> >>> >> >>> -- >> >>> Mathematician: noun, someone who disavows certainty when their >> >>> uncertainty >> >>> set is non-empty, even if that set has measure zero. >> >>> >> >>> Hope: noun, that delusive spirit which escaped Pandora's jar and, with >> >>> her >> >>> lies, prevents mankind from committing a general suicide. ?(As >> >>> interpreted >> >>> by Robert Graves) >> >>> >> >>> _______________________________________________ >> >>> SciPy-User mailing list >> >>> SciPy-User at scipy.org >> >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>> >> >>> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Mathematician: noun, someone who disavows certainty when their uncertainty > set is non-empty, even if that set has measure zero. > > Hope: noun, that delusive spirit which escaped Pandora's jar and, with her > lies, prevents mankind from committing a general suicide. 
?(As interpreted > by Robert Graves) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From vincent at vincentdavis.net Thu Jun 3 18:49:04 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Thu, 3 Jun 2010 16:49:04 -0600 Subject: [SciPy-User] Is the scipy docs "Front" page up-to-date? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 4:38 PM, wrote: > On Thu, Jun 3, 2010 at 6:30 PM, David Goldsmith wrote: >> On Thu, Jun 3, 2010 at 3:21 PM, Vincent Davis >> wrote: >>> >>> On Thu, Jun 3, 2010 at 4:07 PM, ? wrote: >>> > On Thu, Jun 3, 2010 at 6:04 PM, Vincent Davis >>> > wrote: >>> >> On Thu, Jun 3, 2010 at 2:24 PM, David Goldsmith >>> >> wrote: >>> >>> I don't know, but that's not the Front Page of the SciPy Documentation >>> >>> editor, which is: >>> >>> >>> >>> http://docs.scipy.org/scipy/Front%20Page/ >>> >> >>> >> What is even more confusing is that if you click the link at the top >>> >> right of the above page that says "scipy.org editor" you get the other >>> >> page. >>> > >>> > I think this is where the new version of the scipy.org website is >>> > created, and I guess it's only partially finished and connected. >>> > >>> > Josef >>> >>> I meant to post this on the Dev list. >>> >>> And just to be clear. >>> form this page >>> http://docs.scipy.org/doc/ >>> clicking on >>> "Write, review and proof the documentation" http://docs.scipy.org/numpy/ >>> and then at the top right link >>> "scipy.org editor" http://docs.scipy.org/scipyorg/ ?which forwards to >>> http://docs.scipy.org/scipyorg/Front%20Page/ >>> Which is not >>> http://docs.scipy.org/scipy/Front%20Page/ >> >> Correct, from http://docs.scipy.org/numpy/ you want the link just to the >> left of "scipy.org editor," namely?Scipy documentation editor - very >> confusing - can someone suggest a solution? > > I think the description in > http://docs.scipy.org/scipyorg/Front%20Page/ is relatively clear > > "scipy.org editor > This is an editor for a tentative new version of the scipy.org site, > which currently runs on Moinmoin. We are experimenting on moving the > most important parts to Sphinx." In retrospect yes it is clear. > > maybe point to the other two editors in the Warning, or tell > contributors that there are 3 editors now. I would suggest having only one page. With the current and new system explained. It is to me more clear than that I should edit the current system and not the new and the two are distinct. One of the problems is that they look the same. I thought the page was out of date because I thought it was the same site. My suggestion is that if it is not used the link should be difficult to find or documented but not actually a link html link. To be clear "Please do not to use this editor yet -- things are still too much in motion." To me this looks like the same place as the rest of the wiki I can't tell it is a different editor than http://docs.scipy.org/scipy/Front%20Page/ Vincent > > Josef > > >> DG >> >>> >>> >>> Vincent >>> >>> > >>> > >>> >> Vincent >>> >> >>> >>> >>> >>> (though you can hardly be blamed for mistaking the two: they use the >>> >>> same >>> >>> template, and their url's differ by only an org - can that be >>> >>> changed?) and, >>> >>> though paltry, at least it is current. 
>>> >>> >>> >>> DG >>> >>> >>> >>> On Thu, Jun 3, 2010 at 12:33 PM, Vincent Davis >>> >>> >>> >>> wrote: >>> >>>> >>> >>>> Mostly I question "Warning Please do not to use this editor yet -- >>> >>>> things are still too much in motion." >>> >>>> http://docs.scipy.org/scipyorg/Front%20Page/ >>> >>>> _______________________________________________ >>> >>>> SciPy-User mailing list >>> >>>> SciPy-User at scipy.org >>> >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >>> >>> >>> -- >>> >>> Mathematician: noun, someone who disavows certainty when their >>> >>> uncertainty >>> >>> set is non-empty, even if that set has measure zero. >>> >>> >>> >>> Hope: noun, that delusive spirit which escaped Pandora's jar and, with >>> >>> her >>> >>> lies, prevents mankind from committing a general suicide. ?(As >>> >>> interpreted >>> >>> by Robert Graves) >>> >>> >>> >>> _______________________________________________ >>> >>> SciPy-User mailing list >>> >>> SciPy-User at scipy.org >>> >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >> _______________________________________________ >>> >> SciPy-User mailing list >>> >> SciPy-User at scipy.org >>> >> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> Mathematician: noun, someone who disavows certainty when their uncertainty >> set is non-empty, even if that set has measure zero. >> >> Hope: noun, that delusive spirit which escaped Pandora's jar and, with her >> lies, prevents mankind from committing a general suicide. ?(As interpreted >> by Robert Graves) >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From uclamathguy at gmail.com Fri Jun 4 02:39:28 2010 From: uclamathguy at gmail.com (Ryan R. Rosario) Date: Thu, 3 Jun 2010 23:39:28 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Problem with np.load() on Huge Sparse Matrix In-Reply-To: References: Message-ID: <28776255.post@talk.nabble.com> Is this a bug? Has anybody else experienced this? Not being able to load a matrix from disk is a huge limitation for me. I would appreciate any help anyone can provide with this. Thanks, Ryan Ryan R. Rosario wrote: > > Hi, > > I have a very huge sparse (395000 x 395000) CSC matrix that I cannot > save in one pass, so I saved the data, indices, indptr and shape in > separate files as suggested by Dave Wade-Farley a few years back. > > When I try to read back the indices pickle: > >>> np.save("indices.pickle", mymatrix.indices) >>>> indices = np.load("indices.pickle.npy") >>>> indices > array([394852, 394649, 394533, ..., 0, 0, 0], dtype=int32) >>>> intersection_matrix.indices > array([394852, 394649, 394533, ..., 1557, 1223, 285], dtype=int32) > > Why is this happening? My only workaround is to print all of entries > of intersection_matrix.indices to a file, and read in back which takes > up to 2 hours. It would be great if I could get np.load to work > because it is much faster. 
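> For reference, the full pattern I am attempting is essentially this (a
> sketch; mymatrix stands for the 395000 x 395000 CSC matrix):
>
> import numpy as np
> from scipy import sparse
>
> # save the four pieces separately...
> np.save("data.npy", mymatrix.data)
> np.save("indices.npy", mymatrix.indices)
> np.save("indptr.npy", mymatrix.indptr)
> np.save("shape.npy", np.array(mymatrix.shape))
>
> # ...and rebuild the matrix from them later
> restored = sparse.csc_matrix(
>     (np.load("data.npy"), np.load("indices.npy"), np.load("indptr.npy")),
>     shape=tuple(np.load("shape.npy")))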
>
> Thanks,
> Ryan
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
View this message in context: http://old.nabble.com/Problem-with-np.load%28%29-on-Huge-Sparse-Matrix-tp28719518p28776255.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From matthieu.brucher at gmail.com Fri Jun 4 06:00:18 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 4 Jun 2010 12:00:18 +0200
Subject: [SciPy-User] Gaussian filter on an angle
Message-ID:

Hi,

I'm trying to blur an angle field, but it's not easy ;)
Applying gaussian_filter (from ndimage) to the sine and the cosine is not
enough to get a smooth angle field, and of course applying
gaussian_filter directly to the angle field does not yield satisfactory
results. Does anyone know of a function (even if it is not in Python yet)
that could gaussian-filter an angle field? Something like a Riemannian
filter (instead of a Euclidean one)...

Matthieu
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From lorenzo.isella at gmail.com Fri Jun 4 06:16:53 2010
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Fri, 04 Jun 2010 12:16:53 +0200
Subject: [SciPy-User] Again on Calculating (Conditional) Time Intervals
In-Reply-To: References: <1275405395.2088.26.camel@rattlesnake>
Message-ID: <1275646613.2493.26.camel@rattlesnake>

Dear Brandon,
Thanks a lot. I have read your snippet carefully and tested it; it seems
to be doing just fine. Maybe a bit of clarification is in order.
Let us say that an object A meets B at times t_1, t_2, t_3 (in increasing
order and there may be gaps between them). Then A meets C at times t_5,
t_6 and t_7. Then the quantity I am after is abs(t_1-t_5), i.e. the time
interval between the beginning of the A-B and the A-C interaction.
This may sound obscure, but if A is capable of spreading information,
then this time interval is a measure of its activity (how long after
talking to B does it start talking to C?).
Cheers

Lorenzo

On Tue, 2010-06-01 at 18:45 -0400, Nuttall, Brandon C wrote:
> Lorenzo,
>
> I'm not sure I'm clear on what you want. However, the attached Python
> code produces a list of the observed delta times (time between meetings
> of different ID pairs). That list can then be analyzed using histogram
> or any of the probability density functions in scipy and numpy.
>
> Hope this helps.
>
> Brandon
>
> -----Original Message-----
> From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Lorenzo Isella
> Sent: Tuesday, June 01, 2010 11:17 AM
> To: scipy-user at scipy.org
> Subject: [SciPy-User] Again on Calculating (Conditional) Time Intervals
>
> Dear All,
> I hope this is not too off-topic. I have dug up an old email I posted
> which went unanswered quite some time ago.
> I made some progress on a simpler problem than the one for which I
> initially asked for help, and I am attaching my own scripts at the end
> of the email. If anyone can help me to progress a bit further, I will be
> very grateful.
> Consider an array of this kind:
>
> 1 12 45
> 2 7 12
> 2 15 37
> 3 25 89
> 3 8 13
> 3 13 44
> 4 77 89
> 4 77 89
> 5 12 22
> 8 12 22
> 9 15 22
> 11 22 37
> 23 3 12
> 24 18 37
> 25 1 12
>
> where the first column is time measured in some units. The other two
> columns are some ID's identifying infected individuals establishing a
> contact at the corresponding time.
As you can see, there may be > time-gaps in my recorded times and there may be repeated times if > several contacts take place simultaneously. The ID's are always sorted > out in such a way the ID number of the 2nd column is always smaller than > the corresponding entry of the third column (I am obviously indexing > everything from 1). > Now, this is my problem: I want to look at a specific ID I will call A > (let us say A is 12) and calculate all the time differences t_AC-t_AB > for B!=C, i.e. all the time intervals between the most recent contact > between A and B and the first subsequent contact between and A and C > (which has to be different from B). > An example to fix the ideas: A=12, B=22, C=1, then > t_AB=8 (pick the most recent one before t_AC) > t_AC=25, > hence t_AC-t_AB=25-8=17. (but let me say it again: I want to be able to > calculate all such intervals for any B and C on the fly). > It should be clear at this point that the calculated t_AC-t_AB != > t_AB-t_AC as some time-ordering is implicit in the definition (in > t_AC-t_AB, AC contacts have to always be more recent than AB contacts). > Even in the case of multiple disjointed AB and AC contacts, I always > have to look for the closest time intervals in time. E.g. if I had > > 10 12 22 > 40 12 22 > 60 1 12 > 100 1 12 > 110 12 22 > 130 12 22 > 150 1 12 > > then I would work out the time intervals 60-40=20 and 150-130=20. > Now, thanks to the help I got from the list, I am able to calculate the > distribution of contact and interval durations between IDs (something simpler than the conditional > time interval). See the code at the end of the email which you can run on the first dataset I provide in this email to get > the contact/interval distributions. > Sorry for the long email, but any suggestion about how to calculate the conditional probability efficiently would help me a great deal. > Many thanks > > Lorenzo > > > #!/usr/bin/env python > import scipy as s > import pylab as p > import numpy as n > import sys > import string > > > def single_tag_contact_times(sliced_data, tag_id): > > > #I can follow a given tag by selecting his ID number and looking > #for it through the data > > sel=s.where(sliced_data==tag_id) > > #now I need to add a condition in case the id of the tag I have chosen is non-existing > > > if (len(sel[0])==0): > #print "the chosen tag does not exist" > return > > tag_contact_times=sliced_data[sel[0],0] #I select the times at which > #the tag I am tracking undergoes a contact. > > > > tag_no_rep=n.unique1d(tag_contact_times) #The idea is the following: > #in a given time interval delta_slice, a tag may undergo multiple contacts > #with different tags. This corresponds to different entries in the output > #of time_binned_interaction. That function does not allow for multiple contacts between > # the SAME two tags being reported in the same time slice, but it allows the same tag ID > #to appear twice in the same time window if it gets in touch with TWO different tags > #within the same delta_slice. It is fundamental to know that in a given time slice > #tag A has estabilished contact with tag B and tag C (if I discard any bit of this info, > #then I lose info about the state of the network at that time slice), but when it comes to > #simply having the time-dependent distribution of contact durations and intervals between > #any two contacts estabilished by packet A, I will simply say that tag A reported a contact > #in that given time-slice. More sophisticated statistics (e.g. 
the number of contacts
> #estabilished by tag A in a given time slice), can be implemented if found useful/needed
> #later on.
>
>
>
> #p.save("single_tag_contact_times_no_rep.dat",tag_no_rep,fmt='%d')
>
> return tag_no_rep
>
>
> def contact_duration_and_interval_many_tags(sliced_interactions,\
> delta_slice, counter):
>
> #I added this line since now there is no guarantee that in the edge list
> # (contact list) tag_A---tag_B, the id of tag_A is <= id of tag_B.
>
> sliced_interactions[:,1:3]=s.sort(sliced_interactions[:,1:3])
>
> #This function iterates interval_between_contacts_single_tag on a all the tag ID`s
> #thus outputting the distribution of time intervals between any two contacts in the system.
>
> tag_ids= n.unique1d(s.ravel(sliced_interactions[:,1:3])) #to get a list of
> #all tag ID`s, which appear (repeated) on two rows of the matrix output by
> # time_binned_interaction
>
>
> #n.savetxt("tag_IDs.dat", tag_ids , fmt='%d')
>
>
> # tag_ids=tag_ids.astype('int')
>
>
>
> #print "tag_ids is, ", tag_ids
>
> overall_gaps=s.zeros(0) #this array will contain the time intervals between two consecutive
> #contacts for all the tags in the system.
>
>
>
> overall_duration=s.zeros(0) #this array will contain the time duration of the
> #contacts for all the tags in the system.
>
>
>
> for i in xrange(len(tag_ids)):
> track_tag_id=tag_ids[i] #i.e. iterate on all tags
>
> contact_times=single_tag_contact_times(sliced_interactions, track_tag_id) #get
> #an array with all the interactions of a given tag
>
> #print "contact_times is, ", contact_times
>
> results=contact_duration_and_interval_single_tag(contact_times, delta_slice)
>
> tag_duration=results[0]
>
>
> tag_intervals=results[1] #get
> #an array with the time intervals between two contacts for a given tag
>
>
> #print "tag_intervals is, ", tag_intervals
>
> overall_gaps=s.hstack((overall_gaps,tag_intervals)) #collect
> #the results on all tags
>
>
> #print "overall_gaps is, ", overall_gaps
>
> overall_duration=s.hstack((overall_duration,tag_duration))
>
> #overall_gaps=overall_gaps[s.where(overall_gaps !=0)]
> #overall_duration=overall_duration[s.where(overall_duration !=0)]
> filename="many_tags_contact_interval_distr2_%01d"%(counter+1)
> filename=filename+"_.dat"
>
> n.savetxt(filename, overall_gaps , fmt='%d')
>
> filename="many_tags_contact_duration_distr2_%01d"%(counter+1)
> filename=filename+"_.dat"
>
>
> n.savetxt(filename, overall_duration , fmt='%d')
>
> return overall_duration, overall_gaps
>
>
> def contact_duration_and_interval_single_tag(single_tag_no_rep, delta_slice):
>
> #the following if condition is useful only when I am really tracking a particular
> #tag whose ID is given a priory but which may not exist at all (in the sense that
> #it would not estabilish any contact) in the time window during which I am studying
> #the system.
>
>
> if (single_tag_no_rep==None):
> print "The chosen tag does not exist hence no analysis can be performed on it"
> return
>
>
>
> # delta_slice=int(delta_slice) #I do not need floating point arithmetic
>
> single_tag_no_rep=(single_tag_no_rep-single_tag_no_rep[0])/delta_slice
> gaps=s.diff(single_tag_no_rep) #a bit more efficient than the line above
>
> #print "gaps is, ", gaps
>
> #gaps is now an array of integers. It either has a list of consecutive 1`s
> # (which means a contact duration of delta_slice times the number of consecutive ones)
> # of an entry higher than one which expresses (in units of delta_slice) the time during
> #which the tag underwent no contact
>
>
> #p.save("gaps.dat",gaps, fmt='%d')
>
> # find_gap=s.where(gaps != 1)[0]
>
> find_gap=s.where(gaps > 1)[0] #a better definition: a tag may estabilish
> #several contacts within the same timeslice. So I may have some zeros in
> #gaps due to different simultaneous contacts. a tag is truly disconnected
> #from all the others when I see an increment larger than one in the
> #rescaled time.
>
>
> gap_distr=(gaps[find_gap]-1)*delta_slice #so, this is really the list of the
> #time interval between two contacts for my tag. After the discussion with Ciro,
> #I modified slightly the definition (now there is a -1) in the definition.
> #It probably does not matter much for the calculated distribution.
>
> #print "gap_distr is, ", gap_distr
> #NB: the procedure above does NOT break down is gap_distr is empty
>
>
> #Now I calculate the duration of the contacts of my tag.
I changed this bit since > #I had new discussions with Ciro > > #single_tag_no_rep=s.hstack((0,single_tag_no_rep)) > > #print "single_tag_no_rep is, ", single_tag_no_rep, "and its length is, ", len(single_tag_no_rep) > > # e2=s.diff(single_tag_no_rep) > > # #print "e2 is, ", e2 > > # sel=s.where(e2!=1)[0] > # #print "sel is, ", sel > > #sel=s.where(gaps!=1)[0] > > # res=0 #this will contain the results and will be overwritten > > #What follows needs to be tested very carefully! There may be some bugs > > > sol=s.hstack((0,find_gap,len(gaps))) > #print "sol is, ", sol > > > > res=s.diff(sol) > #print "res initially is, ", res > > > res[0]=res[0]+1 #to account for troubles I normally have at the beginning of the array > > #print "res is, ", res > > > > > res=res*delta_slice > > > #print "the sum of all the durations is, ", res.sum() > > return [res,gap_distr] > > > f = open(sys.argv[1]) > sliced_interactions = [map(int, string.split(line)) for line in f.readlines()] > f.close() > > print ("sliced_interactions is, ", sliced_interactions) > > sliced_interactions = s.array(sliced_interactions, dtype="int64") > > print ("sliced_interactions is now, ", sliced_interactions) > > counter=0 > > delta_slice=1 > > contact_duration_and_interval_many_tags(sliced_interactions,\ > delta_slice,counter) > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From aarchiba at physics.mcgill.ca Fri Jun 4 06:38:17 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Fri, 4 Jun 2010 06:38:17 -0400 Subject: [SciPy-User] Gaussian filter on an angle In-Reply-To: References: Message-ID: On 4 June 2010 06:00, Matthieu Brucher wrote: > Hi, > > I'm trying to blur an angle field, but it's not easy ;) > Applying gaussian_filter (from ndimage) on the sine and the cosine is > not enough to have a smooth angle field, and of course applying > gaussian_filter directly on the angle field does not yield > satisfactory results. > Does anyone know of a function (even if it is not in Python yet) that > could gaussian filter an angle field? Something like a Riemannian > filter (instead of a Euclidean one)... This isn't my field, but I suspect you will have problems with this. In particular, there is a *topological* obstacle to blurring angle fields. In the blurred field, you want each angle to be close to that of nearby pixels. But imagine following the angle around the image in a circle: the angle changes by one full turn as you go around this loop. Any smoothing mechanism must either introduce a discontinuity in this loop or retain one full turn around the loop. The former is unlikely to be desirable, and the latter is asking rather a lot of a smoothing method, and in any case still results in rapidly-changing angles around small loops. You could look into "phase unwrapping", techniques to reconstruct a function from its values modulo 2 pi; obviously once you had an unwrapped function blurring would work normally. In this setting unwrapping simply fails when there are topological obstacles. The alternative I would suggest is what you already tried, converting your angles to a vector field and smoothing that. You'll still get defects where the angles change rapidly, but I don't think that can be avoided, and the length of the resulting vectors will tell you something about the degree of defectiveness. The key to making any of this work is having original angles that are not too noisy.
If you're extracting the angles from some underlying data, say by calculating an average direction over squares of an image, I recommend using enough averaging to get the noise on the angle quite small, so that defects will be rare. You may find yourself needing to resolve defects manually if you can't just live with them. Anne P.S. This sort of topological obstruction is the origin for hypothetical "cosmic strings" as well as some of the neat dynamics of vortices in inviscid fluids and magnetic fields in type II superconductors. -A > Matthieu > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stephens.js at gmail.com Fri Jun 4 08:16:45 2010 From: stephens.js at gmail.com (Scott Stephens) Date: Fri, 4 Jun 2010 07:16:45 -0500 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: Update: I was able to create a more successful build using the alternative scons build process with: LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" python setupscons.py scons --silent=1 install Looks like all of the created files have the right architectures, and nothing in the unit tests fail because of import errors. I'm still not getting a flawless scipy.test() run though (6 errors and 1 failure). I haven't yet had a chance to do a deep dive and figure out what exactly is failing and if it might be related to build problems. On Thu, Jun 3, 2010 at 6:25 AM, Scott Stephens wrote: > On Thu, Jun 3, 2010 at 1:58 AM, David Cournapeau wrote: >> On Thu, Jun 3, 2010 at 12:00 PM, Scott Stephens wrote: >>> I built scipy like this: >>> FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined >>> dynamic_lookup" python setup.py build >>> python setup.py install >> >> This comes up often, see here: >> http://ask.scipy.org/en/topic/34-error-building-scipy-on-mac-os-x:-importerror:-dlopen-no-suitable-image-found#reply-95 >> > > I actually knew that the flags were overridden, I included "-fPIC" in > FFLAGS and "-undefined dynamic_lookup" in LDFLAGS because I saw they > were in the default build. ?I didn't know that problems with that were > related to the import problem, so thank you for making that connection > for me. ?Does anyone have any suggestions about how exactly my flags > are wrong, or some method to figure out what's wrong? ?I've included > snippets of the build logs from the original build and from my build > in case they may be useful. 
> > ----- > From default build: > ----- > /usr/local/bin/gfortran -Wall -arch ppc -arch i686 -Wall -undefined > dynamic_lookup -bundle build/temp.m > acosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o > -Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o > build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so > > ----- > From my build: > ----- > /usr/local/bin/gfortran -Wall -Wall -arch x86_64 -undefined > dynamic_lookup build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o > -L/usr/local/lib/gcc/i686-apple-darwin8/4.2.3/x86_64 > -Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o > build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so > > ----- > From default build: > ----- > building 'dfftpack' library > compiling Fortran sources > Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -arch ppc -arch i686 -fPIC -O3 -funroll-loops > Fortran f90 compiler: /usr/local/bin/gfortran -Wall > -fno-second-underscore -arch ppc -arch i686 -fPIC -O3 -funroll-loops > Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -Wall -fno-second-underscore -arch ppc -arch > i686 -fPIC -O3 -funroll-loops > > ----- > From my build: > ----- > building 'dfftpack' library > compiling Fortran sources > Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -arch x86_64 -fPIC -O3 -funroll-loops > Fortran f90 compiler: /usr/local/bin/gfortran -Wall > -fno-second-underscore -arch x86_64 -fPIC -O3 -funroll-loops > Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form > -fno-second-underscore -Wall -fno-second-underscore -arch x86_64 -fPIC > -O3 -funroll-loops > > ------ > From default build: > ----- > building 'scipy.fftpack._fftpack' extension > compiling C sources > C compiler: gcc -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g > -fwrapv -O3 -Wall -Wstrict-prototypes > > creating build/temp.macosx-10.6-i386-2.6/build > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6 > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack > compile options: '-Ibuild/src.macosx-10.6-i386-2.6 > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 > -c' > gcc: scipy/fftpack/src/zfft.c > gcc: scipy/fftpack/src/drfft.c > gcc: scipy/fftpack/src/zrfft.c > gcc: scipy/fftpack/src/zfftnd.c > gcc: build/src.macosx-10.6-i386-2.6/fortranobject.c > gcc: build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.c > /usr/local/bin/gfortran -Wall -arch ppc -arch i686 -Wall -undefined > dynamic_lookup -bundle > 
build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o > -Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o > build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so > > ----- > From my build: > ----- > building 'scipy.fftpack._fftpack' extension > compiling C sources > C compiler: gcc -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g > -fwrapv -O3 -Wall -Wstrict-prototypes > > creating build/temp.macosx-10.6-i386-2.6/build > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6 > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy > creating build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack > compile options: '-Ibuild/src.macosx-10.6-i386-2.6 > -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 > -c' > gcc: scipy/fftpack/src/zfft.c > gcc: scipy/fftpack/src/drfft.c > gcc: scipy/fftpack/src/zrfft.c > gcc: scipy/fftpack/src/zfftnd.c > gcc: build/src.macosx-10.6-i386-2.6/fortranobject.c > gcc: build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.c > /usr/local/bin/gfortran -Wall -Wall -arch x86_64 -undefined > dynamic_lookup build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/drfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.6-i386-2.6/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.6-i386-2.6/build/src.macosx-10.6-i386-2.6/fortranobject.o > -L/usr/local/lib/gcc/i686-apple-darwin8/4.2.3/x86_64 > -Lbuild/temp.macosx-10.6-i386-2.6 -ldfftpack -lgfortran -o > build/lib.macosx-10.6-i386-2.6/scipy/fftpack/_fftpack.so > > It looks to me like all of the flags are the same except for the > architecture related ones. ?Ideas anyone? > > Thanks, > > Scott > From mlist at re-factory.de Fri Jun 4 08:30:09 2010 From: mlist at re-factory.de (Robert Elsner) Date: Fri, 04 Jun 2010 14:30:09 +0200 Subject: [SciPy-User] UnivariateSpline broken? Message-ID: <1275654609.11736.11.camel@robert-desktop-work> Hello, the UnivariateSpline implementation in scipy.interpolate seems to be broken (tested on 0.7). It just produces garbage for some use cases especially with logarithmic spacing of the x values. A sample script to illustrate the problem is here. Am I misusing the code or is it a bug? 
Cheers #!/usr/bin/python import numpy as np from scipy.interpolate import UnivariateSpline, splrep, splev # Works as expected x = np.logspace(-4, 1) y = x**2 sp_1 = UnivariateSpline(x,y,k=3) print np.all((sp_1(x) - y) < 1e-10 ) # Doesn't work as expected y = np.sin(x) sp_2 = UnivariateSpline(x,y,k=3) print np.all((sp_2(x) - y) < 1e-10 ) # Works if using low-level routines tck = splrep(x,y,k=3) print np.all((splev(x, tck) - y) < 1e-10 ) From ralf.gommers at googlemail.com Fri Jun 4 08:31:33 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 4 Jun 2010 20:31:33 +0800 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 8:16 PM, Scott Stephens wrote: > Update: I was able to create a more successful build using the > alternative scons build process with: > LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" python setupscons.py > scons --silent=1 install > > Thanks for the update. > Looks like all of the created files have the right architectures, and > nothing in the unit tests fail because of import errors. I'm still not > getting a flawless scipy.test() run though (6 errors and 1 failure). > I haven't yet had a chance to do a deep dive and figure out what > exactly is failing and if it might be related to build problems. > > Can you post the errors and failure? Currently one test gives an error, test_lapack_misaligned. Others that have been failing in recent checkouts are one lambertw test and several matlab ones, so no need to investigate those. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlist at re-factory.de Fri Jun 4 08:44:08 2010 From: mlist at re-factory.de (Robert Elsner) Date: Fri, 04 Jun 2010 14:44:08 +0200 Subject: [SciPy-User] [SciPy-user] Problem with np.load() on Huge Sparse Matrix In-Reply-To: <28776255.post@talk.nabble.com> References: <28776255.post@talk.nabble.com> Message-ID: <1275655448.11736.15.camel@robert-desktop-work> Hello, how sparse is your matrix (NNZ)? From your code it is not clear that mymatrix and intersection_matrix are actually the same matrices. Cheers. Am Donnerstag, den 03.06.2010, 23:39 -0700 schrieb Ryan R. Rosario: > Is this a bug? Has anybody else experienced this? > > Not being able to load a matrix from disk is a huge limitation for me. I > would appreciate any help anyone can provide with this. > > Thanks, > Ryan > > > > Ryan R. Rosario wrote: > > > > Hi, > > > > I have a very huge sparse (395000 x 395000) CSC matrix that I cannot > > save in one pass, so I saved the data, indices, indptr and shape in > > separate files as suggested by Dave Wade-Farley a few years back. > > > > When I try to read back the indices pickle: > > > >>> np.save("indices.pickle", mymatrix.indices) > >>>> indices = np.load("indices.pickle.npy") > >>>> indices > > array([394852, 394649, 394533, ..., 0, 0, 0], dtype=int32) > >>>> intersection_matrix.indices > > array([394852, 394649, 394533, ..., 1557, 1223, 285], dtype=int32) > > > > Why is this happening? My only workaround is to print all of entries > > of intersection_matrix.indices to a file, and read in back which takes > > up to 2 hours. It would be great if I could get np.load to work > > because it is much faster. 
> > > > Thanks, > > Ryan > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > From gruben at bigpond.net.au Fri Jun 4 09:11:07 2010 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 04 Jun 2010 23:11:07 +1000 Subject: [SciPy-User] Gaussian filter on an angle In-Reply-To: References: Message-ID: <4C08FB6B.8020505@bigpond.net.au> I'm probably misunderstanding what you want because I don't know what an angle field is, but maybe you want to radon transform (or inverse radon transform) the data first, then apply a 1D filter before inverse transforming. If you have access to Matlab you could try the radon and iradon functions. I have some Python code for doing these, part of which has licensing problems wrt getting it into scikits.image. Apologies if I've completely misunderstood what you want to do, Gary Matthieu Brucher wrote: > Hi, > > I'm trying to blur an angle field, but it's not easy ;) > Applying gaussian_filter (from ndimage) on the sine and the cosine is > not enough to have a smooth angle field, and of course applying > gaussian_filter directly on the angle field does not yield > satisfactory results. > Does anyone know of a function (even if it is not in Python yet) that > could gaussian filter an angle field? Something like a Riemannian > filter (instead of a Euclidean one)... > > Matthieu From zachary.pincus at yale.edu Fri Jun 4 10:36:36 2010 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 4 Jun 2010 10:36:36 -0400 Subject: [SciPy-User] Gaussian filter on an angle In-Reply-To: References: Message-ID: <9D023B54-6B32-460E-BF66-314376383005@yale.edu> On Jun 4, 2010, at 6:38 AM, Anne Archibald wrote: > On 4 June 2010 06:00, Matthieu Brucher > wrote: >> Hi, >> >> I'm trying to blur an angle field, but it's not easy ;) >> Applying gaussian_filter (from ndimage) on the sine and the cosine is >> not enough to have a smooth angle field, and of course applying >> gaussian_filter directly on the angle field does not yield >> satisfactory results. >> Does anyone know of a function (even if it is not in Python yet) that >> could gaussian filter an angle field? Something like a Riemannian >> filter (instead of a Euclidean one)... > > This isn't my field, but I suspect you will have problems with this. > In particular, there is a *topological* obstacle to blurring angle > fields. In the blurred field, you want each angle to be close to that > of nearby pixels. But imagine following the angle around the image in > a circle: the angle changes by one full turn as you go around this > loop. Any smoothing mechanism must either introduce a discontinuity in > this loop or retain one full turn around the loop. Anne's quite right -- I've banged my head on things like this before too. I have a different idea about how to get around these issues, in a killing-a-gnat-with-a-bazooka kind of way, though: you might be able to pose this question as one of smoothing via curve-fitting instead of via filtering? E.g. fit a bivariate spline or some other polynomial surface to your angles in such a way as to minimize not the squared residuals directly, but the square of the minimum angle between the fit surface at that point and the data (going clockwise or counterclockwise, whichever is smaller... I forget exactly but there's a closed-form way to calculate that). This way you get a smooth underlying fit in a way that is (I think?) immune to discontinuities in the angle data.
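[Editor's sketch of the circular-residual fit described above. The closed-form minimum angle between two directions a and b is arctan2(sin(a - b), cos(a - b)); the bilinear surface model and the toy data below are illustrative assumptions, not code from this thread:

import numpy as np
from scipy.optimize import leastsq

def circ_diff(a, b):
    # smallest signed angle from b to a, in (-pi, pi]
    return np.arctan2(np.sin(a - b), np.cos(a - b))

def surface(p, x, y):
    # hypothetical low-order polynomial surface theta(x, y)
    return p[0] + p[1] * x + p[2] * y + p[3] * x * y

def residuals(p, x, y, angles):
    # residuals measured on the circle, so wrapped data cannot bias the fit
    return circ_diff(angles, surface(p, x, y))

# toy data: a smooth angle field plus noise, wrapped into (-pi, pi]
x, y = [a.ravel() for a in np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))]
true_field = 0.5 + 2.0 * x - 1.0 * y
angles = np.angle(np.exp(1j * (true_field + 0.3 * np.random.randn(x.size))))

p_fit, ier = leastsq(residuals, np.zeros(4), args=(x, y, angles))
smoothed = surface(p_fit, x, y)

Any smooth parametric surface could stand in for the bilinear model; the essential point is only that the residual is computed on the circle.]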
Problem is either this will be very slow (fitting a many-parameter surface) or probably over-smooth (fitting a low-parameter surface). There are probably some multi-resolution methods you could use. I think the nonlinear least squares optimizers in scipy would be the way to go here? Zach > The former is > unlikely to be desirable, and the latter is asking rather a lot of a > smoothing method, and in any case still results in rapidly-changing > angles around small loops. You could look into "phase unwrapping", > techniques to reconstruct a function from its values modulo 2 pi; > obviously once you had an unwrapped function blurring would work > normally. In this setting unwrapping simply fails when there are > topological obstacles. The alternative I would suggest is what you > already tried, converting your angles to a vector field and smoothing > that. You'll still get defects where the angles change rapidly, but I > don't think that can be avoided, and the length of the resulting > vectors will tell you something about the degree of defectiveness. > > The key to making any of this work is having original angles that are > not too noisy. If you're extracting the angles from some underlying > data, say by calculating an average direction over squares of an > image, I recommend using enough averaging to get the noise on the > angle quite small, so that defects will be rare. You may find yourself > needing to resolve defects manually if you can't just live with them. > > > Anne > > P.S. This sort of topological obstruction is the origin for > hypothetical "cosmic strings" as well as some of the neat dynamics of > vortices in inviscid fluids and magnetic fields in type II > superconductors. -A > >> Matthieu >> -- >> Information System Engineer, Ph.D. >> Blog: http://matt.eifelle.com >> LinkedIn: http://www.linkedin.com/in/matthieubrucher >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From tsyu80 at gmail.com Fri Jun 4 10:36:41 2010 From: tsyu80 at gmail.com (Tony S Yu) Date: Fri, 4 Jun 2010 10:36:41 -0400 Subject: [SciPy-User] UnivariateSpline broken? In-Reply-To: <1275654609.11736.11.camel@robert-desktop-work> References: <1275654609.11736.11.camel@robert-desktop-work> Message-ID: <6FFAADE2-0E86-475D-A5CE-A6B5D13A5EE0@gmail.com> On Jun 4, 2010, at 8:30 AM, Robert Elsner wrote: > > Hello, > > the UnivariateSpline implementation in scipy.interpolate seems to be > broken (tested on 0.7). It just produces garbage for some use cases > especially with logarithmic spacing of the x values. > A sample script to illustrate the problem is here. Am I misusing the > code or is it a bug? > > Cheers > > #!/usr/bin/python > import numpy as np > from scipy.interpolate import UnivariateSpline, splrep, splev > > # Works as expected > x = np.logspace(-4, 1) > y = x**2 > > sp_1 = UnivariateSpline(x,y,k=3) > print np.all((sp_1(x) - y) < 1e-10 ) > > # Doesn't work as expected > y = np.sin(x) > sp_2 = UnivariateSpline(x,y,k=3) > print np.all((sp_2(x) - y) < 1e-10 ) > > # Works if using low-level routines > tck = splrep(x,y,k=3) > print np.all((splev(x, tck) - y) < 1e-10 ) It appears that the default behavior for UnivariateSpline is to smooth the input data. 
You can fix your example above with the following: sp_2 = UnivariateSpline(x,y,k=3,s=0) This default behavior isn't obvious since the default value for the smoothing factor, `s`, is set to None, and the docstring doesn't mention what happens when `s = None`. This behavior is particularly weird because `splrep` also uses `s = None` as a default, but this None value gets changed to `s=0` (I don't know what UnivariateSpline does with `s=None` since some magic happens within Fortran code). Do any devs know if this difference in default behaviors is intentional? -Tony From josef.pktd at gmail.com Fri Jun 4 10:54:18 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 4 Jun 2010 10:54:18 -0400 Subject: [SciPy-User] UnivariateSpline broken? In-Reply-To: <6FFAADE2-0E86-475D-A5CE-A6B5D13A5EE0@gmail.com> References: <1275654609.11736.11.camel@robert-desktop-work> <6FFAADE2-0E86-475D-A5CE-A6B5D13A5EE0@gmail.com> Message-ID: On Fri, Jun 4, 2010 at 10:36 AM, Tony S Yu wrote: > > On Jun 4, 2010, at 8:30 AM, Robert Elsner wrote: > >> >> Hello, >> >> the UnivariateSpline implementation in scipy.interpolate seems to be >> broken (tested on 0.7). It just produces garbage for some use cases >> especially with logarithmic spacing of the x values. >> A sample script to illustrate the problem is here. Am I misusing the >> code or is it a bug? >> >> Cheers >> >> #!/usr/bin/python >> import numpy as np >> from scipy.interpolate import UnivariateSpline, splrep, splev >> >> # Works as expected >> x = np.logspace(-4, 1) >> y = x**2 >> >> sp_1 = UnivariateSpline(x,y,k=3) >> print np.all((sp_1(x) - y) < 1e-10 ) >> >> # Doesn't work as expected >> y = np.sin(x) >> sp_2 = UnivariateSpline(x,y,k=3) >> print np.all((sp_2(x) - y) < 1e-10 ) >> >> # Works if using low-level routines >> tck = splrep(x,y,k=3) >> print np.all((splev(x, tck) - y) < 1e-10 ) > > It appears that the default behavior for UnivariateSpline is to smooth the input data. You can fix your example above with the following: > > sp_2 = UnivariateSpline(x,y,k=3,s=0) > > This default behavior isn't obvious since the default value for the smoothing factor, `s`, is set to None, and the docstring doesn't mention what happens when `s = None`. This behavior is particularly weird because `splrep` also uses `s = None` as a default, but this None value gets changed to `s=0` (I don't know what UnivariateSpline does with `s=None` since some magic happens within Fortran code). > > Do any devs know if this difference in default behaviors is intentional? I think the behavior of UnivariateSpline overall is a bit weird (?), because it does confusing delegation to the subclasses. But, I think, nobody has done a systematic review of the spline classes recently. Hopefully, the docmarathon gets around to clean up the documentation for the spline classes. Josef > > -Tony > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From tsyu80 at gmail.com Fri Jun 4 11:07:05 2010 From: tsyu80 at gmail.com (Tony S Yu) Date: Fri, 4 Jun 2010 11:07:05 -0400 Subject: [SciPy-User] UnivariateSpline broken? 
In-Reply-To: References: <1275654609.11736.11.camel@robert-desktop-work> <6FFAADE2-0E86-475D-A5CE-A6B5D13A5EE0@gmail.com> Message-ID: <1902D1F0-5323-4700-ABCB-A94220475FC8@gmail.com> On Jun 4, 2010, at 10:54 AM, josef.pktd at gmail.com wrote: > > I think the behavior of UnivariateSpline overall is a bit weird (?), > because it does confusing delegation to the subclasses. But, I think, > nobody has done a systematic review of the spline classes recently. > > Hopefully, the docmarathon gets around to clean up the documentation > for the spline classes. > > Josef Actually, it appears a description of the default behavior has been added in the scipy documentation editor. The difference between defaults (for splrep and UnivariateSpline) is still strange, but at least it's documented (or will be, when the edits are merged). -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Fri Jun 4 11:55:48 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 4 Jun 2010 11:55:48 -0400 Subject: [SciPy-User] best way to convert a structured array to a float view (again) Message-ID: Say I have the following arrays that I want to view as/cast to plain ndarrays with float dtype import numpy as np arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], dtype=[("var1",int),("var2",float)]) What I really want to be able to do is something like arr.view(float) or arr2.view((float,2)) But I realize that I can't do this because of how the structs are defined in memory. So my question is, is this the best (cheapest, easiest) way to get arr or arr2 as all floats. arr3 = np.zeros(len(arr), dtype=float) arr3[:] = arr.view(int) or arr4 = np.zeros(len(arr2), dtype=zip(arr2.dtype.names,['float']*len(arr2.dtype.names))) arr4[:] = arr2[:] arr5 = arr4.view((float,len(arr4.dtype.names))) So now I have arr3 and arr5. I need this to be rather general (can ignore strings and object types for now), so that's the reason for the approach I'm taking here. Thanks, Skipper From warren.weckesser at enthought.com Fri Jun 4 12:29:53 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 04 Jun 2010 11:29:53 -0500 Subject: [SciPy-User] ODEINT/ODE solvers redesign--anyone for a sprint at SciPy 2010? Message-ID: <4C092A01.9040905@enthought.com> It's about time we tackled the issue of the ODE solvers in SciPy. Some notes about the issue are on the wiki: http://projects.scipy.org/scipy/wiki/OdeintRedesign This would be a great topic for a sprint at the SciPy conference. I just added it to the list of suggested sprint topics, so give it a vote if you are going to be there and are interested in working on this. Warren From uclamathguy at gmail.com Fri Jun 4 12:30:35 2010 From: uclamathguy at gmail.com (Ryan R. Rosario) Date: Fri, 4 Jun 2010 09:30:35 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Problem with np.load() on Huge Sparse Matrix In-Reply-To: <1275655448.11736.15.camel@robert-desktop-work> References: <28776255.post@talk.nabble.com> <1275655448.11736.15.camel@robert-desktop-work> Message-ID: <28782286.post@talk.nabble.com> Oh. Yes, mymatrix and intersection_matrix are the same. I forgot to change the name. The number of nonzero elements is 1.2 billion. What is weird is that if I use np.load(...,'r') (memmap), it seems to read the file fine. But, if I do not use memmap, the data is corrupt. R. Robert Elsner wrote: > > > Hello, > > how sparse is your matrix (NNZ)? 
From your code it is not clear that > mymatrix and intersection_matrix are actually the same matrices. > > Cheers. > > Am Donnerstag, den 03.06.2010, 23:39 -0700 schrieb Ryan R. Rosario: >> Is this a bug? Has anybody else experienced this? >> >> Not being able to load a matrix from disk is a huge limitation for me. I >> would appreciate any help anyone can provide with this. >> >> Thanks, >> Ryan >> >> >> >> Ryan R. Rosario wrote: >> > >> > Hi, >> > >> > I have a very huge sparse (395000 x 395000) CSC matrix that I cannot >> > save in one pass, so I saved the data, indices, indptr and shape in >> > separate files as suggested by Dave Wade-Farley a few years back. >> > >> > When I try to read back the indices pickle: >> > >> >>> np.save("indices.pickle", mymatrix.indices) >> >>>> indices = np.load("indices.pickle.npy") >> >>>> indices >> > array([394852, 394649, 394533, ..., 0, 0, 0], >> dtype=int32) >> >>>> intersection_matrix.indices >> > array([394852, 394649, 394533, ..., 1557, 1223, 285], >> dtype=int32) >> > >> > Why is this happening? My only workaround is to print all of entries >> > of intersection_matrix.indices to a file, and read in back which takes >> > up to 2 hours. It would be great if I could get np.load to work >> > because it is much faster. >> > >> > Thanks, >> > Ryan >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Problem-with-np.load%28%29-on-Huge-Sparse-Matrix-tp28719518p28782286.html Sent from the Scipy-User mailing list archive at Nabble.com. From kwgoodman at gmail.com Fri Jun 4 14:20:39 2010 From: kwgoodman at gmail.com (Keith Goodman) Date: Fri, 4 Jun 2010 11:20:39 -0700 Subject: [SciPy-User] [ANN] la 0.3, the labeled array Message-ID: The main class of the la package is a labeled array, larry. A larry consists of data and labels. The data is stored as a NumPy array and the labels as a list of lists (one list per dimension). Alignment by label is automatic when you add (or subtract, multiply, divide) two larrys. larry adds the convenience of labels, provides many built-in methods, and let's you use many of your existing array functions. Download: http://pypi.python.org/pypi/la docs ?http://larry.sourceforge.net code ?https://launchpad.net/larry ============= Release Notes ============= la 0.3 (banana) =============== *Release date: 2010-06-04* New larry methods ----------------- - astype: Copy of larry cast to specified type - geometric_mean: new method based on existing array function New functions ------------- - la.util.resample.cross_validation: k-fold cross validation index iterator - la.util.resample.bootstrap: bootstrap index iterator - la.util.misc.listmap: O(n) version of map(list1.index, list2) - la/src/clistmap.pyx: Cython version of listmap with python fallback Enhancements ------------ - Major performance boost in most larry methods! 
- You can now use an optional dtype when creating larrys - You can now optionally skip the integrity test when creating a new larry - Add ability to compare (==, >, !=, etc) larrys with lists and tuples - Documentation and unit tests Breakage from la 0.2 -------------------- - lastrank and lastrank_decay methods combined into one method: lastrank - Given shape (n,m) input, lastrank now returns shape (n,) instead of (n,1) - geometric_mean now reduces input in the same way as lastrank (see above) Bug fixes --------- - #571813 Three larry methods crashed on 1d input - #571737 skiprows missing from parameters section of the fromcsv doc string - #571899 label indexing fails when larry is 3d and index is a tuple of len 2 - #571830 prod, cumprod, and cumsum did not return NaN for all-NaN input - #572638 lastrank chokes on input with a shape tuple that contains zero - #573240 Reduce methods give wrong output with shapes that contain zero - #582579 la.afunc.nans: wrong output for str and object dtype - #583596 assert_larry_equal crashed when comparing float larry to str larry - #585694 cumsum and cumprod crashed on dtype=int Details ------- For further details see the change log in la/ChangeLog. From vincent at vincentdavis.net Fri Jun 4 15:38:24 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Fri, 4 Jun 2010 13:38:24 -0600 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 9:55 AM, Skipper Seabold wrote: > Say I have the following arrays that I want to view as/cast to plain > ndarrays with float dtype > > import numpy as np > arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) > > arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], > dtype=[("var1",int),("var2",float)]) > > What I really want to be able to do is something like > > arr.view(float) I am going to do some timing but this looks promising. Glad to know I am not the only one who thinks going between data types is a hassle. arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], dtype=[("var1",int),("var2",float)]) >>> arr2.dtype=float >>> arr2 array([ 1.18575755e-322, 4.50000000e+000, 1.18575755e-322, 4.50000000e+000, 1.18575755e-322, 4.50000000e+000, 1.18575755e-322, 4.50000000e+000, 1.18575755e-322, 4.50000000e+000]) Of course if you want to leave arr2 untouched you need some type of copy. Vincent > > or > > arr2.view((float,2)) > > But I realize that I can't do this because of how the structs are > defined in memory. So my question is, is this the best (cheapest, > easiest) way to get arr or arr2 as all floats. > > arr3 = np.zeros(len(arr), dtype=float) > arr3[:] = arr.view(int) > > or > > arr4 = np.zeros(len(arr2), > dtype=zip(arr2.dtype.names,['float']*len(arr2.dtype.names))) > arr4[:] = arr2[:] > arr5 = arr4.view((float,len(arr4.dtype.names))) > > So now I have arr3 and arr5. I need this to be rather general (can > ignore strings and object types for now), so that's the reason for the > approach I'm taking here. > > Thanks, > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From vincent at vincentdavis.net Fri Jun 4 15:40:15 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Fri, 4 Jun 2010 13:40:15 -0600 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID:
> > Thanks, > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From vincent at vincentdavis.net Fri Jun 4 15:40:15 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Fri, 4 Jun 2010 13:40:15 -0600 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 1:38 PM, Vincent Davis wrote: > On Fri, Jun 4, 2010 at 9:55 AM, Skipper Seabold wrote: >> Say I have the following arrays that I want to view as/cast to plain >> ndarrays with float dtype >> >> import numpy as np >> arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) >> >> arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], >> dtype=[("var1",int),("var2",float)]) >> >> What I really want to be able to do is something like >> >> arr.view(float) > > I am going to do some timing but this looks promising. Glad to know I > am not the onlyone that think going between data types is a hassel. > > arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], > ? ? ? ? ? ? ? ?dtype=[("var1",int),("var2",float)]) >>>> arr2.dtype=float >>>> arr2 > array([ ?1.18575755e-322, ? 4.50000000e+000, ? 1.18575755e-322, > ? ? ? ? 4.50000000e+000, ? 1.18575755e-322, ? 4.50000000e+000, > ? ? ? ? 1.18575755e-322, ? 4.50000000e+000, ? 1.18575755e-322, > ? ? ? ? 4.50000000e+000]) I just relived that that doesn't work for the int part, It really should give an error. Vincent > > Of course if you want to leave arr2 untouched you need some type of copy. > > Vincent > > >> >> or >> >> arr2.view((float,2)) >> >> But I realize that I can't do this because of how the structs are >> defined in memory. ?So my question is, is this the best (cheapest, >> easiest) way to get arr or arr2 as all floats. >> >> arr3 = np.zeros(len(arr), dtype=float) >> arr3[:] = arr.view(int) >> >> or >> >> arr4 = np.zeros(len(arr2), >> dtype=zip(arr2.dtype.names,['float']*len(arr2.dtype.names))) >> arr4[:] = arr2[:] >> arr5 = arr4.view((float,len(arr4.dtype.names))) >> >> So now I have arr3 and arr5. ?I need this to be rather general (can >> ignore strings and object types for now), so that's the reason for the >> approach I'm taking here. >> >> Thanks, >> >> Skipper >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From dagss at student.matnat.uio.no Fri Jun 4 15:40:32 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 04 Jun 2010 21:40:32 +0200 Subject: [SciPy-User] ODEINT/ODE solvers redesign--anyone for a sprint at SciPy 2010? In-Reply-To: <4C092A01.9040905@enthought.com> References: <4C092A01.9040905@enthought.com> Message-ID: <4C0956B0.60007@student.matnat.uio.no> Warren Weckesser wrote: > It's about time we tackled the issue of the ODE solvers in SciPy. Some > notes about the issue are on the wiki: > http://projects.scipy.org/scipy/wiki/OdeintRedesign > > This would be a great topic for a sprint at the SciPy conference. I > just added it to the list of suggested sprint topics, so give it a vote > if you are going to be there and are interested in working on this. > I'm not going to be there, but I have an interest in this... 
Anyway here's an idea that may or may not be beyond the scope of this redesign: It would be great with support for fast Cython callbacks, so that one can implement a class in Cython and have the whole process run in compiled code without any interpreter steps per loop. Basically, just do cdef class DoubleFunction: ... if not isinstance(callback, DoubleFunction): callback = CallPythonFunctionDoubleFunction(callback) Perhaps one could accept ctypes function pointers too... Dag Sverre From dagss at student.matnat.uio.no Fri Jun 4 15:42:50 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Fri, 04 Jun 2010 21:42:50 +0200 Subject: [SciPy-User] ODEINT/ODE solvers redesign--anyone for a sprint at SciPy 2010? In-Reply-To: <4C0956B0.60007@student.matnat.uio.no> References: <4C092A01.9040905@enthought.com> <4C0956B0.60007@student.matnat.uio.no> Message-ID: <4C09573A.5030709@student.matnat.uio.no> Dag Sverre Seljebotn wrote: > Warren Weckesser wrote: > >> It's about time we tackled the issue of the ODE solvers in SciPy. Some >> notes about the issue are on the wiki: >> http://projects.scipy.org/scipy/wiki/OdeintRedesign >> >> This would be a great topic for a sprint at the SciPy conference. I >> just added it to the list of suggested sprint topics, so give it a vote >> if you are going to be there and are interested in working on this. >> >> > I'm not going to be there, but I have an interest in this... > > Anyway here's an idea that may or may not be beyond the scope of this > redesign: It would be great with support for fast Cython callbacks, so > that one can implement a class in Cython and have the whole process run > in compiled code without any interpreter steps per loop. > > Basically, just do > > cdef class DoubleFunction: ... > > if not isinstance(callback, DoubleFunction): > callback = CallPythonFunctionDoubleFunction(callback) > > Perhaps one could accept ctypes function pointers too... > Nah, sorry, this is better left as a later independent task. Perhaps worth keeping in mind while designing the API and writing any new code though? Dag Sverre From jsseabold at gmail.com Fri Jun 4 15:43:18 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 4 Jun 2010 15:43:18 -0400 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 3:38 PM, Vincent Davis wrote: > On Fri, Jun 4, 2010 at 9:55 AM, Skipper Seabold wrote: >> Say I have the following arrays that I want to view as/cast to plain >> ndarrays with float dtype >> >> import numpy as np >> arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) >> >> arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], >> dtype=[("var1",int),("var2",float)]) >> >> What I really want to be able to do is something like >> >> arr.view(float) > > I am going to do some timing but this looks promising. Glad to know I > am not the onlyone that think going between data types is a hassel. > > arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], > ? ? ? ? ? ? ? ?dtype=[("var1",int),("var2",float)]) >>>> arr2.dtype=float >>>> arr2 > array([ ?1.18575755e-322, ? 4.50000000e+000, ? 1.18575755e-322, > ? ? ? ? 4.50000000e+000, ? 1.18575755e-322, ? 4.50000000e+000, > ? ? ? ? 1.18575755e-322, ? 4.50000000e+000, ? 1.18575755e-322, > ? ? ? ? 4.50000000e+000]) > > Of course if you want to leave arr2 untouched you need some type of copy. > Yeah, you can't do it in place. 
The int data gets turned to garbage there, which is not what I want. Skipper From R.Springuel at umit.maine.edu Fri Jun 4 17:19:55 2010 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Fri, 04 Jun 2010 17:19:55 -0400 Subject: [SciPy-User] Speeding up a search algorithm In-Reply-To: References: Message-ID: <4C096DFB.1000704@umit.maine.edu> After playing around with some of the various suggestions, as well as a few of my own ideas, I've modified my algorithm to the following: c = [] d = [] for i in range(len(current)): c += [current[i]]*i d += current[:i] search = distancematrix[c,d] m = numpy.nanmin(search) mask = search == m n1 = c[mask.argmax()] n2 = d[mask.argmax()] if aggr != None: if aggr: p1 = 0 else: p1 = N for i in range(len(search)): if mask[i]: if c[i] < 0: p2 = tree.pop[c[i]] else: p2 = 1 if d[i] < 0: p2 += tree.pop[d[i]] else: p2 += 1 if p2 < p1 and not aggr: n1 = c[i] n2 = d[i] p1 = p2 elif p2 > p1 and aggr: n1 = c[i] n2 = d[i] p1 = p2 In testing on a 3000x3000 array, I'm seeing approximately a 7- to 8-fold increase in speed. While I believe that will serve my purposes for now, if anyone has any other ideas on how to increase the speed further, I'm all ears. According to my testing "search = distancematrix[c,d]" is consistently the slowest step but the "for i in range(len(search)):" loop takes about the same amount of time when it's run. -- R. Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only From stefan at sun.ac.za Fri Jun 4 19:19:29 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 4 Jun 2010 16:19:29 -0700 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: On 4 June 2010 08:55, Skipper Seabold wrote: > Say I have the following arrays that I want to view as/cast to plain > ndarrays with float dtype > > import numpy as np > arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) > > arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], > dtype=[("var1",int),("var2",float)]) How about arr.view(int).astype(float) St?fan From warren.weckesser at enthought.com Fri Jun 4 20:05:44 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 04 Jun 2010 19:05:44 -0500 Subject: [SciPy-User] ODEINT/ODE solvers redesign--anyone for a sprint at SciPy 2010? In-Reply-To: <4C09573A.5030709@student.matnat.uio.no> References: <4C092A01.9040905@enthought.com> <4C0956B0.60007@student.matnat.uio.no> <4C09573A.5030709@student.matnat.uio.no> Message-ID: <4C0994D8.2030103@enthought.com> Dag Sverre Seljebotn wrote: > Dag Sverre Seljebotn wrote: > >> Warren Weckesser wrote: >> >> >>> It's about time we tackled the issue of the ODE solvers in SciPy. Some >>> notes about the issue are on the wiki: >>> http://projects.scipy.org/scipy/wiki/OdeintRedesign >>> >>> This would be a great topic for a sprint at the SciPy conference. I >>> just added it to the list of suggested sprint topics, so give it a vote >>> if you are going to be there and are interested in working on this. >>> >>> >>> >> I'm not going to be there, but I have an interest in this... >> >> Anyway here's an idea that may or may not be beyond the scope of this >> redesign: It would be great with support for fast Cython callbacks, so >> that one can implement a class in Cython and have the whole process run >> in compiled code without any interpreter steps per loop. 
>> >> Basically, just do >> >> cdef class DoubleFunction: ... >> >> if not isinstance(callback, DoubleFunction): >> callback = CallPythonFunctionDoubleFunction(callback) >> >> Perhaps one could accept ctypes function pointers too... >> >> > Nah, sorry, this is better left as a later independent task. Perhaps > worth keeping in mind while designing the API and writing any new code > though? > I would love to get something like this working, and we should definitely keep it in mind during any discussions of the ODE solvers redesign. Warren > Dag Sverre > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stephens.js at gmail.com Fri Jun 4 20:20:12 2010 From: stephens.js at gmail.com (Scott Stephens) Date: Fri, 4 Jun 2010 19:20:12 -0500 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 7:31 AM, Ralf Gommers wrote: > On Fri, Jun 4, 2010 at 8:16 PM, Scott Stephens > wrote: >> Looks like all of the created files have the right architectures, and >> nothing in the unit tests fail because of import errors. I'm still not >> getting a flawless scipy.test() run though (6 errors and 1 failure). >> I haven't yet had a chance to do a deep dive and figure out what >> exactly is failing and if it might be related to build problems. >> > Can you post the errors and failure? Currently one test gives an error, > test_lapack_misaligned. Others that have been failing in recent checkouts > are one lambertw test and several matlab ones, so no need to investigate > those. Here they are: ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric) ERROR: test_add_function_ordered (test_catalog.TestCatalog) ERROR: Test persisting a function in the default catalog (test_add_function_persistent1 from scipy/weave/tests/test_catalog.py) ERROR: Shouldn't get a single file from the temp dir. (test_get_existing_files2 from scipy/weave/tests/test_catalog.py) FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) This is scipy-0.7.1 from a tarball. I'm wondering if if some of the nonsymmetric failures are related to me not installing UMFPACK. At some point in my build adventure I got some messages about UMFPACK not being found, but nothing about it appears in the installation instructions, so I haven't done anything with it. 
-- Scott From vincent at vincentdavis.net Sat Jun 5 00:16:56 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Fri, 4 Jun 2010 22:16:56 -0600 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: 2010/6/4 St?fan van der Walt : > On 4 June 2010 08:55, Skipper Seabold wrote: >> Say I have the following arrays that I want to view as/cast to plain >> ndarrays with float dtype >> >> import numpy as np >> arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) >> >> arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], >> dtype=[("var1",int),("var2",float)]) How about this, arr2 = np.column_stack((arr[col] for col in arr.dtype.names)) > > How about > > arr.view(int).astype(float) > > St?fan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Sat Jun 5 06:45:48 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 5 Jun 2010 18:45:48 +0800 Subject: [SciPy-User] ANN: SciPy 0.8.0 beta 1 Message-ID: I'm pleased to announce the first beta release of SciPy 0.8.0. SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This beta release comes almost one and a half year after the 0.7.0 release and contains many new features, numerous bug-fixes, improved test coverage, and better documentation. Please note that SciPy 0.8.0b1 requires Python 2.4 or greater and NumPy 1.4.1 or greater. For information, please see the release notes: http://sourceforge.net/projects/scipy/files/scipy/0.8.0b1/NOTES.txt/view You can download the release from here: https://sourceforge.net/projects/scipy/ Python 2.5/2.6 binaries for Windows and OS X are available as well as source tarballs for other platforms. Thank you to everybody who contributed to this release. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Sat Jun 5 04:24:40 2010 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 5 Jun 2010 01:24:40 -0700 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: On 4 June 2010 21:16, Vincent Davis wrote: > 2010/6/4 Stéfan van der Walt : >> On 4 June 2010 08:55, Skipper Seabold wrote: >>> Say I have the following arrays that I want to view as/cast to plain >>> ndarrays with float dtype >>> >>> import numpy as np >>> arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) >>> >>> arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], >>> dtype=[("var1",int),("var2",float)]) > > How about this, > arr2 = np.column_stack((arr[col] for col in arr.dtype.names)) That code is more complicated, and results in a 10x slow-down: In [5]: timeit np.column_stack((arr[col] for col in arr.dtype.names)) 100000 loops, best of 3: 13.4 us per loop In [6]: timeit arr.view(int).astype(float) 100000 loops, best of 3: 2.01 us per loop Regards Stéfan _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From vincent at vincentdavis.net Sat Jun 5 11:07:03 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Sat, 5 Jun 2010 09:07:03 -0600 Subject: [SciPy-User] best way to convert a structured array to a float view (again) In-Reply-To: References: Message-ID: 2010/6/5 Stéfan van der Walt : > On 4 June 2010 21:16, Vincent Davis wrote: >> 2010/6/4 Stéfan van der Walt : >>> On 4 June 2010 08:55, Skipper Seabold wrote: >>>> Say I have the following arrays that I want to view as/cast to plain >>>> ndarrays with float dtype >>>> >>>> import numpy as np >>>> arr = np.array([(24,),(24,),(24,),(24,),(24,)], dtype=[("var1",int)]) >>>> >>>> arr2 = np.array([(24,4.5),(24,4.5),(24,4.5),(24,4.5),(24,4.5)], >>>> dtype=[("var1",int),("var2",float)]) >> >> How about this, >> arr2 = np.column_stack((arr[col] for col in arr.dtype.names)) > > That code is more complicated, and results in a 10x slow-down: > > In [5]: timeit np.column_stack((arr[col] for col in arr.dtype.names)) > 100000 loops, best of 3: 13.4 us per loop > > In [6]: timeit arr.view(int).astype(float) > 100000 loops, best of 3: 2.01 us per loop Yes but what do you do for a 2d array? arr.view(int).astype(float) doesn't really work for that? I might be solving a different problem from Skipper. >>> arr2 array([(24, 4.5), (24, 4.5), (24, 4.5), (24, 4.5), (24, 4.5)], dtype=[('var1', '<i8'), ('var2', '<f8')]) >>> arr2.view(int).astype(float) array([ 2.40000000e+01, 4.61675257e+18, 2.40000000e+01, 4.61675257e+18, 2.40000000e+01, 4.61675257e+18, 2.40000000e+01, 4.61675257e+18, 2.40000000e+01, 4.61675257e+18]) Vincent > Regards > Stéfan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Sat Jun 5 16:26:03 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 5 Jun 2010 22:26:03 +0200 Subject: [SciPy-User] python for physics In-Reply-To: References: <20100516165143.GF19278@phare.normalesup.org> Message-ID: <20100605202603.GG14186@phare.normalesup.org> Hey Jeremy, Sorry for taking so long to reply. I can't properly manage the load of e-mail I get. The source code is here: http://github.com/GaelVaroquaux/scipy-tutorials Disclaimer: It's ugly, and might not work with your sphinx/matplotlib/scipy version.
HTH, Ga?l On Mon, May 17, 2010 at 01:20:53PM -0600, Jeremy Conlin wrote: > Ga?l, > Thanks for posting these links, they look like a really good > introduction which I can use to help my coworkers. (I'm not even the > original poster.) > One question though is how you got the output from iPython into your > document. Of course you could just copy and paste it in, but for some > reason I believe you have this process automated. Is it automated and > are you willing to share how you did it? > Thanks, > Jeremy > > This is not really physics-related, and is more oriented towards image > > analysis than Physics, and on top of that it is unfinished, and I have > > been shying from publishing on the net, but the notes of the courses I > > give can be found here: > > http://gael-varoquaux.info/python4science-2x1.pdf > > Also, see Fernando's py4science page, full of useful material: > > http://fperez.org/py4science/starter_kit.html > > Ga?l > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Gael Varoquaux Research Fellow, INRIA Laboratoire de Neuro-Imagerie Assistee par Ordinateur NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-78-35 Mobile: ++ 33-6-28-25-64-62 http://gael-varoquaux.info From vincent at vincentdavis.net Sat Jun 5 20:55:23 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Sat, 5 Jun 2010 18:55:23 -0600 Subject: [SciPy-User] How to keep current scipy version installed Message-ID: I would like to keep the most current scipy installed. By this I mean the current development version not the most recent release. What is the best way to update. I would like to use git git://github.com/cournape/numpy.git and git://github.com/pv/scipy-work.git as it seems that scipy is moving to git. But I am unsure the best way to go about update. Any advice? Thanks Vincent From ralf.gommers at googlemail.com Sat Jun 5 23:49:38 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Jun 2010 11:49:38 +0800 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On Sat, Jun 5, 2010 at 8:20 AM, Scott Stephens wrote: > On Fri, Jun 4, 2010 at 7:31 AM, Ralf Gommers > wrote: > > On Fri, Jun 4, 2010 at 8:16 PM, Scott Stephens > > wrote: > >> Looks like all of the created files have the right architectures, and > >> nothing in the unit tests fail because of import errors. I'm still not > >> getting a flawless scipy.test() run though (6 errors and 1 failure). > >> I haven't yet had a chance to do a deep dive and figure out what > >> exactly is failing and if it might be related to build problems. > >> > > Can you post the errors and failure? Currently one test gives an error, > > test_lapack_misaligned. Others that have been failing in recent checkouts > > are one lambertw test and several matlab ones, so no need to investigate > > those. 
> > Here they are: > > ERROR: test_complex_nonsymmetric_modes > (test_arpack.TestEigenComplexNonSymmetric) > ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) > ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric) > ERROR: test_add_function_ordered (test_catalog.TestCatalog) > ERROR: Test persisting a function in the default catalog > (test_add_function_persistent1 from > scipy/weave/tests/test_catalog.py) > ERROR: Shouldn't get a single file from the temp dir. > (test_get_existing_files2 from scipy/weave/tests/test_catalog.py) > FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) > > This is scipy-0.7.1 from a tarball. Could you try the 0.8.0 beta (http://sourceforge.net/projects/scipy/files/)? Some of this may be fixed, and you also get another 18 months worth or so of new features and bug fixes. > I'm wondering if some of the nonsymmetric failures are related to me not installing UMFPACK. At some point in my build adventure I got some messages about UMFPACK not being found, but nothing about it appears in the installation instructions, so I haven't done anything with it. > > UMFPACK is not required for scipy and it not being installed shouldn't result in test failures. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Jun 5 23:52:54 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 6 Jun 2010 11:52:54 +0800 Subject: [SciPy-User] How to keep current scipy version installed In-Reply-To: References: Message-ID: On Sun, Jun 6, 2010 at 8:55 AM, Vincent Davis wrote: > I would like to keep the most current scipy installed. By this I mean > the current development version, not the most recent release. What is > the best way to update? I would like to use git > git://github.com/cournape/numpy.git and > git://github.com/pv/scipy-work.git as it seems that scipy is moving to > git. But I am unsure of the best way to go about the update. > Any advice? > > Follow the instructions here: http://projects.scipy.org/numpy/wiki/GitMirror Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at vincentdavis.net Sun Jun 6 00:27:55 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Sat, 5 Jun 2010 22:27:55 -0600 Subject: [SciPy-User] How to keep current scipy version installed In-Reply-To: References: Message-ID: On Sat, Jun 5, 2010 at 9:52 PM, Ralf Gommers wrote: > > > On Sun, Jun 6, 2010 at 8:55 AM, Vincent Davis > wrote: >> >> I would like to keep the most current scipy installed. By this I mean >> the current development version, not the most recent release. What is >> the best way to update? I would like to use git >> git://github.com/cournape/numpy.git and >> git://github.com/pv/scipy-work.git as it seems that scipy is moving to >> git. But I am unsure of the best way to go about the update. >> Any advice? >> > Follow the instructions here: http://projects.scipy.org/numpy/wiki/GitMirror > Well, that doesn't really answer my question; I have OK knowledge of working with git. What I am asking is: right now scipy is installed (on Mac OS X) in /Library/Frameworks/EPD64.framework/Versions/6.1/lib/python2.6/site-packages/scipy So if I want to update this installed scipy, do I keep a scipy clone/branch/... in another folder and then do a normal install, or is it possible to update it directly? I suppose if only non-compiled files change I can just update those files rather than doing an install.
Thanks Vincent > Cheers, > Ralf > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From stefan at sun.ac.za Sun Jun 6 01:30:09 2010 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sat, 5 Jun 2010 22:30:09 -0700 Subject: [SciPy-User] How to keep current scipy version installed In-Reply-To: References: Message-ID: On 5 June 2010 21:27, Vincent Davis wrote: > So if I want to update this installed scipy, do I keep a > scipy clone/branch/... in another folder and then do a normal install, > or is it possible to update it directly? I suppose if only non-compiled > files change I can just update those files rather than doing an > install. I just normally compile NumPy/SciPy in-place and then point my PYTHONPATH there. You may compile in-place using: python setupscons.py scons -i --jobs=2 (if you have numscons installed) Otherwise just python setup.py build_ext -i Regards Stéfan From kuiper at jpl.nasa.gov Sun Jun 6 20:17:47 2010 From: kuiper at jpl.nasa.gov (Tom Kuiper) Date: Sun, 06 Jun 2010 17:17:47 -0700 Subject: [SciPy-User] memory usage question Message-ID: <4C0C3AAB.1080209@jpl.nasa.gov> Greetings all. I have a feeling that, coming at this with a background in FORTRAN and C, I'm missing some subtlety, possibly of an OO nature. Basically, I'm looping over very large data arrays and memory usage just keeps growing even though I re-use the arrays. Below is a stripped-down version of what I'm doing. You'll recognize it as gulping a great quantity of data (1 million complex samples), Fourier transforming these by 1000-sample blocks into spectra, co-adding the spectra, and doing this 255 times, for a grand total 1000-point spectrum. At iteration 108 of the outer loop, I get a memory error. By then, according to 'top', ipython (or python) is using around 85% of 3.5 GB of memory. nsecs = 255 fft_size = 1000 P = zeros(fft_size) for i in range(nsecs): header,data = get_raw_record(fd_in) num_bytes = len(data) label, reclen, recver, softver, spcid, vsrid, schanid, bits_per_sample, \ ksamps_per_sec, sdplr, prdx_dss_id, prdx_sc_id, prdx_pass_num, \ prdx_uplink_band,prdx_downlink_band, trk_mode, uplink_dss_id, ddc_lo, \ rf_to_if_lo, data_error, year, doy, sec, data_time_offset, frov, fro, \ frr, sfro,rf_freq, schan_accum_phase, (scpp0,scpp1,scpp2,scpp3), \ schan_label = header # ksamp_per_sec = 1e3, number of complex samples in 'data' = 1e6 num_32bit_words = len(data)*8/BITS_PER_32BIT_WORD cmplx_samp_per_word = (BITS_PER_32BIT_WORD/(2*bits_per_sample)) cmplx_samples = unpack_vdr_data(num_32bit_words,cmplx_samp_per_word,data) del(data) # This makes no difference for j in range(0,ksamps_per_sec*1000/fft_size): index = int(j*fft_size) S = fft(cmplx_samples[index:index+fft_size]) P += S*conjugate(S) del(cmplx_samples) # This makes no difference if (i % 20) == 0: gc.collect(0) # This makes no difference P /= nsecs sample_period = 1./ksamps_per_sec # kHz f = fftfreq(fft_size, d=sample_period) What am I missing? Best regards Tom p.s. Many of you will see this twice, for which I apologize.
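Stripped of the telemetry header unpacking, Tom's loop is a standard block-FFT power-spectrum average. A minimal, self-contained sketch of that pattern, with random complex samples standing in for the get_raw_record/unpack_vdr_data readers (both specific to Tom's setup, so hypothetical stand-ins here):

import numpy as np

fft_size = 1000   # samples per FFT block
nsecs = 255       # number of records to average
P = np.zeros(fft_size)

for _ in range(nsecs):
    # stand-in for one record of complex samples read from the file
    cmplx_samples = (np.random.standard_normal(fft_size)
                     + 1j * np.random.standard_normal(fft_size))
    S = np.fft.fft(cmplx_samples)
    P += (S * np.conjugate(S)).real  # accumulate power; keeps P real

P /= nsecs                           # average spectrum
f = np.fft.fftfreq(fft_size, d=1.0)  # matching frequency axis

Accumulating (S * np.conjugate(S)).real avoids relying on an implicit complex-to-real cast in the += step. With P pre-allocated once, the working set per iteration should stay at a couple of fft_size-length temporaries, so steadily growing memory would point at the record-reading side rather than at this loop.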
From pgmdevlist at gmail.com Sun Jun 6 20:38:26 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sun, 6 Jun 2010 20:38:26 -0400 Subject: [SciPy-User] failing to create a scikit.timeseries object In-Reply-To: References: <86ACE1D1-6B7E-4649-97EF-1A9E7DB534DD@gmail.com> Message-ID: <857DD23D-9BEE-4E11-BE64-C6CDD7AC6381@gmail.com> On Jun 3, 2010, at 3:36 AM, eneide.odissea wrote: > > I thank you all and I apologize for my very bad code snippet. > Do you know whether in scikits.timeseries there is a command / option / configuration that allows to store time using long instead of integer? It's already done internally in the C code: the basic element that corresponds to a Date is the date_info struct, which uses a long to store the absolute date and a double for the absolute time. For more details, please refer to cdates.c (in the src directory of the distribution). > Probably it might be necessary also to setup a callback somewhere able to convert the datetime into this internally stored number; have you any idea about it? All done in C... Now, there are some limitations with the current approach, the most obvious one being that you can't define your own frequencies. There's some new code in numpy that should allow it; I need to work on that... From stephens.js at gmail.com Sun Jun 6 23:42:25 2010 From: stephens.js at gmail.com (Scott Stephens) Date: Sun, 6 Jun 2010 22:42:25 -0500 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: On Sat, Jun 5, 2010 at 10:49 PM, Ralf Gommers wrote: > Could you try the 0.8.0 beta (http://sourceforge.net/projects/scipy/files/)? > Some of this may be fixed, and you also get another 18 months worth or so of > new features and bug fixes. 0.8.0b1 doesn't build for me using numscons. Build log is attached. I'm using numpy 1.4.1 built from source (and tested, no errors or unknown failures) and numscons freshly checked out from the git repository (at least numscons-0.11 is required to build scipy-0.8.0b1, and only numscons-0.10 is available via easy_install; couldn't find source for numscons-0.11). -- Scott -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 68495 bytes Desc: not available URL: From cgohlke at uci.edu Sun Jun 6 23:55:37 2010 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sun, 06 Jun 2010 20:55:37 -0700 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: <4C0C6DB9.1000108@uci.edu> > On Sat, Jun 5, 2010 at 10:49 PM, Ralf Gommers > wrote: >> Could you try the 0.8.0 beta (http://sourceforge.net/projects/scipy/files/)? >> Some of this may be fixed, and you also get another 18 months worth or so of >> new features and bug fixes. > > 0.8.0b1 doesn't build for me using numscons. Build log is attached. > I'm using numpy 1.4.1 built from source (and tested, no errors or > unknown failures) and numscons freshly checked out from the git > repository (at least numscons-0.11 is required to build scipy-0.8.0b1, > and only numscons-0.10 is available via easy_install; couldn't find > source for numscons-0.11). > See ticket #1176.
-- Christoph From david at silveregg.co.jp Mon Jun 7 02:23:25 2010 From: david at silveregg.co.jp (David) Date: Mon, 07 Jun 2010 15:23:25 +0900 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: References: Message-ID: <4C0C905D.6080307@silveregg.co.jp> On 06/07/2010 12:42 PM, Scott Stephens wrote: > On Sat, Jun 5, 2010 at 10:49 PM, Ralf Gommers > wrote: >> Could you try the 0.8.0 beta (http://sourceforge.net/projects/scipy/files/)? >> Some of this may be fixed, and you also get another 18 months worth or so of >> new features and bug fixes. > > 0.8.0b1 doesn't build for me using numscons. Build log is attached. > I'm using numpy 1.4.1 built from source (and tested, no errors or > unknown failures) and numscons freshly checked out from the git > repository (at least numscons-0.11 is required to build scipy-0.8.0b1, > and only numscons-0.10 is available via easy_install; couldn't find > source for numscons-0.11). Could you see whether r6487 fixes it for you? David From matthieu.brucher at gmail.com Mon Jun 7 07:27:46 2010 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 7 Jun 2010 13:27:46 +0200 Subject: [SciPy-User] Gaussian filter on an angle In-Reply-To: <9D023B54-6B32-460E-BF66-314376383005@yale.edu> References: <9D023B54-6B32-460E-BF66-314376383005@yale.edu> Message-ID: Hi, Sorry for the long delay; I didn't want to go over this on the week-end. I dealt a great deal with the topological issue (after all, my PhD was on data reduction on Riemannian manifolds), so I know the issues I'm facing. Well, almost. Anne: my issues are the defects after the smoothing. The angle field is an azimuth for a tilt in 3D. The problem is that when the vertical angle is very smooth, the azimuth becomes less accurate and flips from 0 to pi. This causes my issues. Even when I smooth the original data before the conversion, the locations where those flips occur remain small and cause the next computation to diverge. I don't know if I can reconstruct the original data (the data I have are the result of the resolution of an inverse problem where the original data is unknown). I guess I will have to find another solution or in the end look further into phase smoothing... Thanks a lot for all your answers! Matthieu 2010/6/4 Zachary Pincus : > > On Jun 4, 2010, at 6:38 AM, Anne Archibald wrote: > >> On 4 June 2010 06:00, Matthieu Brucher >> wrote: >>> Hi, >>> >>> I'm trying to blur an angle field, but it's not easy ;) >>> Applying gaussian_filter (from ndimage) on the sine and the cosine is >>> not enough to have a smooth angle field, and of course applying >>> gaussian_filter directly on the angle field does not yield >>> satisfactory results. >>> Does anyone know of a function (even if it is not in Python yet) that >>> could gaussian filter an angle field? Something like a Riemannian >>> filter (instead of a Euclidean one)... >> >> This isn't my field, but I suspect you will have problems with this. >> In particular, there is a *topological* obstacle to blurring angle >> fields. In the blurred field, you want each angle to be close to that >> of nearby pixels. But imagine following the angle around the image in >> a circle: the angle changes by one full turn as you go around this >> loop. Any smoothing mechanism must either introduce a discontinuity in >> this loop or retain one full turn around the loop. > Anne's quite right -- I've banged my head on things like this before too.
I have a different idea about how to get around these issues, in > a killing-a-gnat-with-a-bazooka kind of way, though: you might be able > to pose this question as one of smoothing via curve-fitting instead of > via filtering? > > E.g. fit a bivariate spline or some other polynomial surface to your > angles in such a way as to minimize not the squared residuals > directly, but the square of the minimum angle between the fit surface > at that point and the data (going clockwise or counterclockwise, > whichever is smaller... I forget exactly but there's a closed-form way > to calculate that). This way you get a smooth underlying fit in a way > that is (I think?) immune to discontinuities in the angle data. > > Problem is either this will be very slow (fitting a many-parameter > surface) or probably over-smooth (fitting a low-parameter surface). > There are probably some multi-resolution methods you could use. I > think the nonlinear least squares optimizers in scipy would be the way > to go here? > > Zach > > > > > >> The former is >> unlikely to be desirable, and the latter is asking rather a lot of a >> smoothing method, and in any case still results in rapidly-changing >> angles around small loops. You could look into "phase unwrapping", >> techniques to reconstruct a function from its values modulo 2 pi; >> obviously once you had an unwrapped function blurring would work >> normally. In this setting unwrapping simply fails when there are >> topological obstacles. The alternative I would suggest is what you >> already tried, converting your angles to a vector field and smoothing >> that. You'll still get defects where the angles change rapidly, but I >> don't think that can be avoided, and the length of the resulting >> vectors will tell you something about the degree of defectiveness. >> >> The key to making any of this work is having original angles that are >> not too noisy. If you're extracting the angles from some underlying >> data, say by calculating an average direction over squares of an >> image, I recommend using enough averaging to get the noise on the >> angle quite small, so that defects will be rare. You may find yourself >> needing to resolve defects manually if you can't just live with them. >> >> >> Anne >> >> P.S. This sort of topological obstruction is the origin for >> hypothetical "cosmic strings" as well as some of the neat dynamics of >> vortices in inviscid fluids and magnetic fields in type II >> superconductors. -A >> >>> Matthieu >>> -- >>> Information System Engineer, Ph.D. >>> Blog: http://matt.eifelle.com >>> LinkedIn: http://www.linkedin.com/in/matthieubrucher >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher
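The sin/cos route Matthieu mentions, combined with Anne's point that the length of the smoothed vector field flags the defects, fits in a few lines. A minimal sketch on a random stand-in angle field (the field and sigma are placeholders, not data from the thread):

import numpy as np
from scipy import ndimage

theta = np.random.uniform(0.0, 2.0 * np.pi, (64, 64))  # stand-in angle field

# Smooth the unit-vector components rather than the raw angles, so that
# neighbouring values near 0 and 2*pi average correctly instead of cancelling.
s = ndimage.gaussian_filter(np.sin(theta), sigma=2.0)
c = ndimage.gaussian_filter(np.cos(theta), sigma=2.0)

theta_smooth = np.arctan2(s, c) % (2.0 * np.pi)
# The smoothed vector shrinks toward zero length at defects, where
# neighbouring angles disagree; it can serve as a confidence map.
confidence = np.hypot(s, c)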
From vincent at vincentdavis.net Mon Jun 7 10:32:57 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Mon, 7 Jun 2010 08:32:57 -0600 Subject: [SciPy-User] numpy python 3.1.2 on osx Message-ID: I have read that numpy has been ported to python 3. Is this correct? I have tried to install it on py3.1.2 but have had no luck. (for me it is a matter of luck rather than skill) My most recent attempt resulted in what appeared to me an endless loop after typing "make"; I gave it ~30 min. I am looking for some guidance, like "you should start here." If this is more appropriate on the numpy-user list let me know and I will repost there. Thanks Vincent From robert.kern at gmail.com Mon Jun 7 10:37:07 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 7 Jun 2010 10:37:07 -0400 Subject: [SciPy-User] numpy python 3.1.2 on osx In-Reply-To: References: Message-ID: On Mon, Jun 7, 2010 at 10:32, Vincent Davis wrote: > I have read that numpy has been ported to python 3. Is this correct? > > I have tried to install it on py3.1.2 but have had no luck. (for me it > is a matter of luck rather than skill) > > My most recent attempt resulted in what appeared to me an endless loop > after typing "make"; I gave it ~30 min. Exactly what did you do? Exactly what did you see? numpy has no Makefile. > I am looking for some guidance, like "you should start here." > > If this is more appropriate on the numpy-user list let me know and I > will repost there. Yes, numpy-discussion is the more appropriate list. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From vincent at vincentdavis.net Mon Jun 7 10:42:24 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Mon, 7 Jun 2010 08:42:24 -0600 Subject: [SciPy-User] numpy python 3.1.2 on osx In-Reply-To: References: Message-ID: Sorry, too many distractions while writing this email; I'll try again when I can think :) Vincent On Mon, Jun 7, 2010 at 8:37 AM, Robert Kern wrote: > On Mon, Jun 7, 2010 at 10:32, Vincent Davis wrote: >> I have read that numpy has been ported to python 3. Is this correct? >> >> I have tried to install it on py3.1.2 but have had no luck. (for me it >> is a matter of luck rather than skill) >> >> My most recent attempt resulted in what appeared to me an endless loop >> after typing "make"; I gave it ~30 min. > > Exactly what did you do? Exactly what did you see? numpy has no Makefile. > >> I am looking for some guidance, like "you should start here." >> >> If this is more appropriate on the numpy-user list let me know and I >> will repost there. > > Yes, numpy-discussion is the more appropriate list. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From jsseabold at gmail.com Mon Jun 7 10:42:14 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 7 Jun 2010 10:42:14 -0400 Subject: [SciPy-User] numpy python 3.1.2 on osx In-Reply-To: References: Message-ID: On Mon, Jun 7, 2010 at 10:32 AM, Vincent Davis wrote: > I have read that numpy has been ported to python 3. Yes. > > I have tried to install it on py3.1.2 but have had no luck. (for me it > is a matter of luck rather than skill) > > My most recent attempt resulted in what appeared to me an endless loop > after typing "make"; I gave it ~30 min. > It should be the same as installing under Python < 3 except you use the version of Python you want.
From source (probably should delete the build directory if it's there already in the numpy source directory) python3.1 setup.py build python3.1 setup.py install Might need to sudo the last command. Don't know on OS X. Skipper From afraser at lanl.gov Mon Jun 7 12:18:10 2010 From: afraser at lanl.gov (Andy Fraser) Date: Mon, 07 Jun 2010 10:18:10 -0600 Subject: [SciPy-User] using multiple processors for particle filtering In-Reply-To: <8739xgndes.fsf@lanl.gov> (Andy Fraser's message of "Tue\, 25 May 2010 10\:39\:55 -0600") References: <8739xgndes.fsf@lanl.gov> Message-ID: <87iq5uzuil.fsf@lanl.gov> Thanks to all who offered advice. I am posting a final follow-up in case anyone reads the thread looking for a solution. The weight calculations were taking more than 97% of the time in my code. By putting the loop in C++ and using pthreads for the most time-consuming computations, I reduced the execution time by a factor of 68. I used the boost multi_array library for arrays and the numpy_boost code to do conversions from numpy to boost arrays. Lessons learned: 1. As Zach says, for speed don't loop over data in python 2. Don't use python multiprocessing for this kind of problem 3. Don't put calls to python interface functions (e.g., PyObject_GetAttrString) in threaded C++ code. Extract data (or at least pointers) from python objects before starting multiple threads. 4. Do use numpy_boost (http://code.google.com/p/numpy-boost/) and pthreads. Here are the key lines of C++: void *weight_thread(void *thread_arg){ /* Code that runs in a separate thread. The expensive calculations are done by t_d->c->weight(). The ugly code here sets up data for that call using the single argument to this function. */ weight_data *t_d = (struct weight_data *) thread_arg; array_d2_ref par_list = *t_d->par_list; array_d1_ref w = *t_d->w; for (int i=t_d->start; i < t_d->stop; i++){ array_d1_ref X = t_d->XQ->X[i]; array_d1_ref q = t_d->XQ->q[i]; for (int k=0;k<3;k++){ par_list[i][k+4] = X[k]; } for (int k=0;k<4;k++){ par_list[i][k] = q[k]; } w[i] = t_d->c->weight(X, q, t_d->t); } pthread_exit(NULL); } static PyObject * weights_list(PyObject *self, PyObject *args){ /* Python call: weights_list(others,t,N_threads,w,par_list,Y,ref,data,mu,Q) This function fills the numpy arrays 'w' and 'par_list' with results of weight calculations for each plane in the list 'others'. Calls to the 'weight' method of a C++ camera 'c' built from 'Y', 'ref', 'data', 'mu' and 'Q' do the calculations.
*/ PyArrayObject *Py_ref, *Py_data; PyObject *others, *dummy; int t, N_threads; if (!PyArg_ParseTuple(args, "OiiOOOO&O&OO", &others, &t, &N_threads, &dummy, &dummy, &dummy, PyArray_Converter, &Py_ref, PyArray_Converter, &Py_data, &dummy, &dummy )) return NULL; Py_ssize_t i = 3; // Starting argument for conversions numpy_boost w(PySequence_GetItem(args, i++)); numpy_boost par_list(PySequence_GetItem(args, i++)); numpy_boost Y(PySequence_GetItem(args, i++)); PyImage ref = PyImage(Py_ref); PyImage data = PyImage(Py_data); Py_DECREF(Py_ref); Py_DECREF(Py_data); i += 2; numpy_boost mu(PySequence_GetItem(args, i++)); numpy_boost Q(PySequence_GetItem(args, i++)); Xq XQ(others); int N = PyList_Size(others); Camera c = Camera::Camera(&Y, &ref, &data, &mu, &Q); weight_data t_d[N_threads]; pthread_t t_id[N_threads]; pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); for (int i=0; i>>>> "A" == Andy Fraser writes: A> I am using a particle filter to estimate the trajectory of a A> camera based on a sequence of images taken by the camera. The A> code is slow, but I have 8 processors in my desktop machine. A> I'd like to use them to get results 8 times faster. I've been A> looking at the following sections of A> http://docs.python.org/library: "16.6. multiprocessing" and A> "16.2. threading". I've also read some discussion from 2006 on A> scipy-user at scipy.org about seeds for random numbers in threads. A> I don't have any experience with multiprocessing and would A> appreciate advice. A> Here is a bit of code that I want to modify: A> for i in xrange(len(self.particles)): self.particles[i] A> = self.particles[i].random_fork() A> Each particle is a class instance that represents a possible A> camera state (position, orientation, and velocities). A> particle.random_fork() is a method that moves the position and A> orientation based on current velocities and then uses A> numpy.random.standard_normal((N,)) to perturb the velocities. A> I handle the correlation structure of the noise by matrices A> that are members of particle, and I do some of the calculations A> in c++. A> I would like to do something like: A> for i in xrange(len(self.particles)): nv = A> numpy.random.standard_normal((N,)) A> launch_on_any_available_processor( self.particles[i] = A> self.particles[i].random_fork(nv) ) wait_for_completions() A> But I don't see a command like A> "launch_on_any_available_processor". I would be grateful for A> any advice. From stephens.js at gmail.com Mon Jun 7 23:22:29 2010 From: stephens.js at gmail.com (Scott Stephens) Date: Mon, 7 Jun 2010 22:22:29 -0500 Subject: [SciPy-User] Building Scipy for Mac OS X 10.6 In-Reply-To: <4C0C905D.6080307@silveregg.co.jp> References: <4C0C905D.6080307@silveregg.co.jp> Message-ID: On Mon, Jun 7, 2010 at 1:23 AM, David wrote: > On 06/07/2010 12:42 PM, Scott Stephens wrote: >> On Sat, Jun 5, 2010 at 10:49 PM, Ralf Gommers >> ?wrote: >>> Could you try the 0.8.0 beta (http://sourceforge.net/projects/scipy/files/)? >>> Some of this may be fixed, and you also get another 18 months worth or so of >>> new features and bug fixes. >> >> 0.8.0b1 doesn't build for me using numscons. ?Build log is attached. >> I'm using numpy 1.4.1 built from source (and tested, no errors or >> unknown failures) and numscons freshly checked out from the git >> repository (at least numscons-0.11 is required to build scipy-0.8.0b1, >> and only numscons-0.10 is available via easy_install; couldn't find >> source for numscons-0.11). 
> > Could you see whether r6487 fixes it for you ? > Checked out and built the trunk (r6490). r6487 fixed the build problem, and some of the test failures I was getting in 0.7.1, but not all. The errors and failures I get now are: ERROR: test_decomp.test_lapack_misaligned(, (array([[ 1.734e-255, 8.189e-217, 4.025e-178, 1.903e-139, 9.344e-101, ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric) ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) Full text of the test run is attached. -- Scott -------------- next part -------------- >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.4.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.9.0.dev6493 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy Python version 2.6.4 (r264:75706, Mar 27 2010, 11:45:57) [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] nose version 0.11.3 ............................................................................................................................................................................................................................................................................../Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/fitpack2.py:639: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...../Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/fitpack2.py:580: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ...........................................K..K.........................................................................................................................................................................................................................................................................................................................Warning: 1000000 bytes requested, 20 bytes read. ./Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `write_array` is deprecated! This function is replaced by numpy.savetxt which allows the same functionality through a different syntax. warnings.warn(depdoc, DeprecationWarning) /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `read_array` is deprecated! The functionality of read_array is in numpy.loadtxt which allows the same functionality using different syntax. 
warnings.warn(depdoc, DeprecationWarning) ...........................................Exception AttributeError: "'netcdf_file' object has no attribute 'mode'" in > ignored ............/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `npfile` is deprecated! You can achieve the same effect as using npfile using numpy.save and numpy.load. You can use memory-mapped arrays and data-types to map out a file format for direct manipulation in NumPy. warnings.warn(depdoc, DeprecationWarning) ........./Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/wavfile.py:30: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning) /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/wavfile.py:120: WavFileWarning: chunk not understood warnings.warn("chunk not understood", WavFileWarning) ...............................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS...............................................................S...................................................................................................................................................................................................................E.....................................................................................................................................................................................................SSS.........S........................................................................................................................................................................................................................................................................................................................................................................................................................................................** On entry to DGEEV , parameter number 5 had an illegal value ** On entry to DGEEV , parameter number 5 had an illegal value ............/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/filter_design.py:247: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless "results may be meaningless", BadCoefficients) 
..................................................................................................................................................................................................................................................................................SSSSSSSSSSS.........EFEE..K.......................................................................................................................................K...............................................................K.........................................................................................................................................................KK.......................................................................................................................................................................................................................................................................................................................................................................................................K.K...................................................................................................................................................................................................................................................................................................................................................................................K........K.........SSSSS.....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................EEE.........................................................................................................................................................S......................................................................................................................................................................................./Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/morestats.py:736: UserWarning: Ties preclude use of exact statistic. warnings.warn("Ties preclude use of exact statistic.") ...................................................................................................................................................................................................................................................................................................................................................................................................... 
====================================================================== ERROR: test_decomp.test_lapack_misaligned(, (array([[ 1.734e-255, 8.189e-217, 4.025e-178, 1.903e-139, 9.344e-101, ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/tests/test_decomp.py", line 1074, in check_lapack_misaligned func(*a,**kwargs) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/basic.py", line 49, in solve a1, b1 = map(asarray_chkfinite,(a,b)) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/function_base.py", line 586, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 267, in test_complex_nonsymmetric_modes self.eval_evec(m,typ,k,which) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 248, in eval_evec eval,evec=eigen(a,k,which=which) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 204, in test_nonsymmetric_modes self.eval_evec(m,typ,k,which) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 186, in eval_evec eval,evec=eigen(a,k,which=which,**kwds) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 214, in test_starting_vector self.eval_evec(self.symmetric[0],typ,k,which='LM',v0=v0) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 186, in eval_evec eval,evec=eigen(a,k,which=which,**kwds) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 397, in eigen params.iterate() File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 222, in iterate raise RuntimeError("Error info=%d in arpack" % self.info) RuntimeError: Error info=-8 in arpack ====================================================================== ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/tests/test_continuous_basic.py", line 291, in check_cdf_ppf npt.assert_almost_equal(distfn.cdf(distfn.ppf([0.001,0.5,0.999], *arg), *arg), File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1324, in ppf place(output,cond,self._ppf(*goodargs)*scale + loc) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1028, in _ppf return self.vecfunc(q,*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/function_base.py", line 1804, in __call__ theout = self.thefunc(*newargs) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 974, in _ppf_single_call return optimize.brentq(self._ppf_to_solve, self.xa, self.xb, args=(q,)+args, xtol=self.xtol) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/optimize/zeros.py", line 262, in brentq r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp) ValueError: f(a) and f(b) must have different signs ====================================================================== ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/tests/test_continuous_basic.py", line 296, in check_sf_isf npt.assert_almost_equal(distfn.sf(distfn.isf([0.1,0.5,0.9], *arg), *arg), File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1366, in isf place(output,cond,self._isf(*goodargs)*scale + loc) #PB use _isf instead of _ppf File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1031, in _isf return self._ppf(1.0-q,*args) #use correct _ppf for subclasses File 
"/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1028, in _ppf return self.vecfunc(q,*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/function_base.py", line 1804, in __call__ theout = self.thefunc(*newargs) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 974, in _ppf_single_call return optimize.brentq(self._ppf_to_solve, self.xa, self.xb, args=(q,)+args, xtol=self.xtol) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/optimize/zeros.py", line 262, in brentq r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp) ValueError: f(a) and f(b) must have different signs ====================================================================== ERROR: test_continuous_basic.test_cont_basic(, (), 'wald') ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest self.test(*self.arg) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/tests/test_continuous_basic.py", line 306, in check_pdf median = distfn.ppf(0.5, *arg) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1324, in ppf place(output,cond,self._ppf(*goodargs)*scale + loc) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 1028, in _ppf return self.vecfunc(q,*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/function_base.py", line 1804, in __call__ theout = self.thefunc(*newargs) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/distributions.py", line 974, in _ppf_single_call return optimize.brentq(self._ppf_to_solve, self.xa, self.xb, args=(q,)+args, xtol=self.xtol) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/optimize/zeros.py", line 262, in brentq r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp) ValueError: f(a) and f(b) must have different signs ====================================================================== FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 156, in test_complex_symmetric_modes self.eval_evec(self.symmetric[0],typ,k,which) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 145, in eval_evec assert_array_almost_equal(eval,exact_eval,decimal=_ndigits[typ]) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", line 765, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/utils.py", line 609, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.07188725 +6.23436023e-08j, 
4.91291142 -3.25412906e-08j], dtype=complex64) y: array([ 5.+0.j, 6.+0.j], dtype=complex64) ---------------------------------------------------------------------- Ran 4613 tests in 89.389s FAILED (KNOWNFAIL=11, SKIP=38, errors=7, failures=1) From mdekauwe at gmail.com Tue Jun 8 11:53:38 2010 From: mdekauwe at gmail.com (mdekauwe) Date: Tue, 8 Jun 2010 08:53:38 -0700 (PDT) Subject: [SciPy-User] Re: removing for loops... In-Reply-To: References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> Message-ID: <28819759.post@talk.nabble.com> OK, still haven't quite solved this in my head, so with a slightly different array where the makeup is... array[nummonths, numvars, numrows, numcols] where nummonths = 12, numvars = 1, numrows = 180, numcols = 360. However, let's say I only want to extract some of the elements in the array (in a practical sense, land points, ignoring sea points). So I construct two arrays r and c which contain the row and column indices for these points, such that their length is numpts = 15238. If I wanted to just output these points into a new array (dims = nummonths x 15238), is there a good way to do this? Not quite sure what I am doing wrong; I guess it relates to a size mismatch? out_array = np.zeros((nummonths, numpts), dtype=np.float32) month = 0 out_array[month,:] = array[xrange(month, nummonths), 0, r, c] Thanks.
>>>> >>>> >>>> josef.pktd wrote: >>>>> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe wrote: >>>>>> >>>>>> Could you possibly if you have time explain further your comment re >>>>>> the >>>>>> p-values, your suggesting I am misusing them? >>>>> >>>>> Depends on your use and interpretation >>>>> >>>>> test statistics, p-values are random variables, if you look at several >>>>> tests at the same time, some p-values will be large just by chance. >>>>> If, for example you just look at the largest test statistic, then the >>>>> distribution for the max of several test statistics is not the same as >>>>> the distribution for a single test statistic >>>>> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons >>>>> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm >>>>> >>>>> we also just had a related discussion for ANOVA post-hoc tests on the >>>>> pystatsmodels group. >>>>> >>>>> Josef >>>>>> >>>>>> Thanks. >>>>>> >>>>>> >>>>>> josef.pktd wrote: >>>>>>> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe >>>>>>> wrote: >>>>>>>> >>>>>>>> Sounds like I am stuck with the loop as I need to do the comparison >>>>>>>> for >>>>>>>> each >>>>>>>> pixel of the world and then I have a basemap function call which I >>>>>>>> guess >>>>>>>> slows it down further...hmm >>>>>>> >>>>>>> I don't see much that could be done differently, after a brief look. >>>>>>> >>>>>>> stats.pearsonr could be replaced by an array version using directly >>>>>>> the formula for correlation even with nans. wilcoxon looks slow, and >>>>>>> I >>>>>>> never tried or seen a faster version. >>>>>>> >>>>>>> just a reminder, the p-values are for a single test, when you have >>>>>>> many of them, then they don't have the right size/confidence level >>>>>>> for >>>>>>> an overall or joint test. (some packages report a Bonferroni >>>>>>> correction in this case) >>>>>>> >>>>>>> Josef >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> i.e. >>>>>>>> >>>>>>>> def compareSnowData(jules_var): >>>>>>>> ? ?# Extract the 11 years of snow data and return >>>>>>>> ? ?outrows = 180 >>>>>>>> ? ?outcols = 360 >>>>>>>> ? ?numyears = 11 >>>>>>>> ? ?nummonths = 12 >>>>>>>> >>>>>>>> ? ?# Read various files >>>>>>>> ? ?fname="world_valid_jules_pts.ascii" >>>>>>>> ? ?(numpts, land_pts_index, latitude, longitude, rows, cols) = >>>>>>>> jo.read_land_points_ascii(fname, 1.0) >>>>>>>> >>>>>>>> ? ?fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" >>>>>>>> ? ?jules_data1 = jo.readJulesOutBinary(fname, numrows=15238, >>>>>>>> numcols=1, >>>>>>>> \ >>>>>>>> ? ? ? ? ? ? ? ? ? ? ? timesteps=132, numvars=26) >>>>>>>> ? ?fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" >>>>>>>> ? ?jules_data2 = jo.readJulesOutBinary(fname, numrows=15238, >>>>>>>> numcols=1, >>>>>>>> \ >>>>>>>> ? ? ? ? ? ? ? ? ? ? ? timesteps=132, numvars=26) >>>>>>>> >>>>>>>> ? ?# grab some space >>>>>>>> ? ?data1_snow = np.zeros((nummonths * numyears, numpts), >>>>>>>> dtype=np.float32) >>>>>>>> ? ?data2_snow = np.zeros((nummonths * numyears, numpts), >>>>>>>> dtype=np.float32) >>>>>>>> ? ?pearsonsr_snow = np.ones((outrows, outcols), dtype=np.float32) * >>>>>>>> np.nan >>>>>>>> ? ?wilcoxStats_snow = np.ones((outrows, outcols), dtype=np.float32) >>>>>>>> * >>>>>>>> np.nan >>>>>>>> >>>>>>>> ? ?# extract the data >>>>>>>> ? ?data1_snow = jules_data1[:,jules_var,:,0] >>>>>>>> ? ?data2_snow = jules_data2[:,jules_var,:,0] >>>>>>>> ? ?data1_snow = np.where(data1_snow < 0.0, np.nan, data1_snow) >>>>>>>> ? ?data2_snow = np.where(data2_snow < 0.0, np.nan, data2_snow) >>>>>>>> ? 
>>>>>>>>    #for month in xrange(numyears * nummonths):
>>>>>>>>    #    for i in xrange(numpts):
>>>>>>>>    #        data1 = jules_data1[month,jules_var,land_pts_index[i],0]
>>>>>>>>    #        data2 = jules_data2[month,jules_var,land_pts_index[i],0]
>>>>>>>>    #        if data1 >= 0.0:
>>>>>>>>    #            data1_snow[month,i] = data1
>>>>>>>>    #        else:
>>>>>>>>    #            data1_snow[month,i] = np.nan
>>>>>>>>    #        if data2 > 0.0:
>>>>>>>>    #            data2_snow[month,i] = data2
>>>>>>>>    #        else:
>>>>>>>>    #            data2_snow[month,i] = np.nan
>>>>>>>>
>>>>>>>>    # exclude any months from *both* arrays where we have dodgy data,
>>>>>>>>    # else we can't do the correlations correctly!!
>>>>>>>>    data1_snow = np.where(np.isnan(data2_snow), np.nan, data1_snow)
>>>>>>>>    data2_snow = np.where(np.isnan(data1_snow), np.nan, data1_snow)
>>>>>>>>
>>>>>>>>    # put data on a regular grid...
>>>>>>>>    print 'regridding landpts...'
>>>>>>>>    for i in xrange(numpts):
>>>>>>>>        # exclude the NaN, note masking them doesn't work in the stats func
>>>>>>>>        x = data1_snow[:,i]
>>>>>>>>        x = x[np.isfinite(x)]
>>>>>>>>        y = data2_snow[:,i]
>>>>>>>>        y = y[np.isfinite(y)]
>>>>>>>>
>>>>>>>>        # r^2
>>>>>>>>        # exclude v. small arrays, i.e. we need just over 4 years of data
>>>>>>>>        if len(x) and len(y) > 50:
>>>>>>>>            pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] = (stats.pearsonr(x, y)[0])**2
>>>>>>>>
>>>>>>>>        # wilcox signed rank test
>>>>>>>>        # make sure we have enough samples to do the test
>>>>>>>>        d = x - y
>>>>>>>>        d = np.compress(np.not_equal(d,0), d, axis=-1) # keep all non-zero differences
>>>>>>>>        count = len(d)
>>>>>>>>        if count > 10:
>>>>>>>>            z, pval = stats.wilcoxon(x, y)
>>>>>>>>            # only map out significantly different data
>>>>>>>>            if pval < 0.05:
>>>>>>>>                wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] = np.mean(x - y)
>>>>>>>>
>>>>>>>>    return (pearsonsr_snow, wilcoxStats_snow)
>>>>>>>>
>>>>>>>> josef.pktd wrote:
>>>>>>>>>
>>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe wrote:
>>>>>>>>>>
>>>>>>>>>> Also I then need to remap the 2D array I make onto another grid (the
>>>>>>>>>> world in this case), which again I am doing with a loop (note numpts
>>>>>>>>>> is a lot bigger than in my example above).
>>>>>>>>>>
>>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), dtype=np.float32) * np.nan
>>>>>>>>>> for i in xrange(numpts):
>>>>>>>>>>     # exclude the NaN, note masking them doesn't work in the stats func
>>>>>>>>>>     x = data1_snow[:,i]
>>>>>>>>>>     x = x[np.isfinite(x)]
>>>>>>>>>>     y = data2_snow[:,i]
>>>>>>>>>>     y = y[np.isfinite(y)]
>>>>>>>>>>
>>>>>>>>>>     # wilcox signed rank test
>>>>>>>>>>     # make sure we have enough samples to do the test
>>>>>>>>>>     d = x - y
>>>>>>>>>>     d = np.compress(np.not_equal(d,0), d, axis=-1) # keep all non-zero differences
>>>>>>>>>>     count = len(d)
>>>>>>>>>>     if count > 10:
>>>>>>>>>>         z, pval = stats.wilcoxon(x, y)
>>>>>>>>>>         # only map out significantly different data
>>>>>>>>>>         if pval < 0.05:
>>>>>>>>>>             wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] = np.mean(x - y)
>>>>>>>>>>
>>>>>>>>>> Now I think I can push the data in one move into the wilcoxStats_snow
>>>>>>>>>> array by removing the index, but I can't see how I will get the
>>>>>>>>>> individual x and y pts for each array member correctly without the
>>>>>>>>>> loop; this was my attempt, which of course doesn't work!
>>>>>>>>>>
>>>>>>>>>> x = data1_snow[:,:]
>>>>>>>>>> x = x[np.isfinite(x)]
>>>>>>>>>> y = data2_snow[:,:]
>>>>>>>>>> y = y[np.isfinite(y)]
>>>>>>>>>>
>>>>>>>>>> # r^2
>>>>>>>>>> # exclude v. small arrays, i.e. we need just over 4 years of data
>>>>>>>>>> if len(x) and len(y) > 50:
>>>>>>>>>>     pearsonsr_snow[((180-1)-(rows-1)),cols-1] = (stats.pearsonr(x, y)[0])**2
>>>>>>>>>
>>>>>>>>> If you want to do pairwise comparisons with stats.wilcoxon, then you
>>>>>>>>> might be stuck with the loop, since wilcoxon takes only two 1d arrays
>>>>>>>>> at a time (if I read the help correctly).
>>>>>>>>>
>>>>>>>>> Also the presence of nans might force the use of a loop. stats.mstats
>>>>>>>>> has masked array versions, but I didn't see wilcoxon in the list.
>>>>>>>>> (Even when vectorized operations would work with regular arrays, nan
>>>>>>>>> or masked array versions still have to loop in many cases.)
>>>>>>>>>
>>>>>>>>> If you have many columns with count <= 10, so that wilcoxon is not
>>>>>>>>> calculated, then it might be worth using only array operations up to
>>>>>>>>> that point. If wilcoxon is calculated most of the time, then it's not
>>>>>>>>> worth thinking too hard about this.
>>>>>>>>>
>>>>>>>>> Josef
>>>>>>>>>
>>>>>>>>>> thanks.
>>>>>>>>>>
>>>>>>>>>> mdekauwe wrote:
>>>>>>>>>>>
>>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both methods work.
>>>>>>>>>>>
>>>>>>>>>>> I don't quite get what you mean about slicing with axis > 3. Is
>>>>>>>>>>> there a link you can recommend I should read? Does that mean, given
>>>>>>>>>>> I have 4 dims, that Josef's suggestion would be more advisable in
>>>>>>>>>>> this case?
>>>>>>>>>
>>>>>>>>> There were several discussions on the mailing lists (fancy slicing
>>>>>>>>> and indexing). Your case is safe, but if you run in future into funny
>>>>>>>>> shapes, you can look up the details. When in doubt, I use
>>>>>>>>> np.arange(...)
>>>>>>>>>
>>>>>>>>> Josef
>>>>>>>>>
>>>>>>>>>>> Thanks.
>>>>>>>>>>>
>>>>>>>>>>> josef.pktd wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks, that works...
>>>>>>>>>>>>>
>>>>>>>>>>>>> So the way to do it is with np.arange(tsteps)[:,None]; that was
>>>>>>>>>>>>> the step I was struggling with. So this forms a 2D array which
>>>>>>>>>>>>> replaces the two for loops? Do I have that right?
>>>>>>>>>>>>
>>>>>>>>>>>> Yes, but as Zachary showed, if you need the full index in a
>>>>>>>>>>>> dimension, then you can use slicing. It might be faster.
>>>>>>>>>>>> And a warning: mixing slices and index arrays with 3 or more
>>>>>>>>>>>> dimensions can have some surprise switching of axes.
>>>>>>>>>>>>
>>>>>>>>>>>> Josef
>>>>>>>>>>>>
>>>>>>>>>>>>> A lot quicker...!
>>>>>>>>>>>>>
>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>
>>>>>>>>>>>>> josef.pktd wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I am trying to extract data from a 4D array and store it in a
>>>>>>>>>>>>>>> 2D array, but avoid my current usage of for loops for speed,
>>>>>>>>>>>>>>> as in reality the array sizes are quite big. Could someone
>>>>>>>>>>>>>>> also try to explain the solution if they have a spare moment,
>>>>>>>>>>>>>>> as I am still finding it quite difficult to get over the
>>>>>>>>>>>>>>> habit of using loops (C convert, for my sins). I get that one
>>>>>>>>>>>>>>> could precompute the indices i and j, i.e.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> i = np.arange(tsteps)
>>>>>>>>>>>>>>> j = np.arange(numpts)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> but just can't get my head round how I then use them...
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> import numpy as np
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> numpts = 10
>>>>>>>>>>>>>>> tsteps = 12
>>>>>>>>>>>>>>> vari = 22
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, 1))
>>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), dtype=np.float32)
>>>>>>>>>>>>>>> index = np.arange(numpts)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> for i in xrange(tsteps):
>>>>>>>>>>>>>>>     for j in xrange(numpts):
>>>>>>>>>>>>>>>         new_data[i,j] = data[i,5,index[j],0]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The index arrays need to be broadcastable against each other.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think this should do it:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, np.arange(numpts), 0]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Josef

-- 
View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28819759.html
Sent from the Scipy-User mailing list archive at Nabble.com.
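To make the pattern this thread converges on concrete, here is a small,
self-contained sketch. The sizes, names and the vectorised line are taken
from Martin's example and Josef's reply above; the equivalence check at the
end is added here purely for illustration.

import numpy as np

numpts, tsteps, vari = 10, 12, 22
data = np.random.random((tsteps, vari, numpts, 1))

# loop version, as posted in the thread
new_data = np.zeros((tsteps, numpts), dtype=np.float32)
for i in range(tsteps):
    for j in range(numpts):
        new_data[i, j] = data[i, 5, j, 0]

# vectorised version: np.arange(tsteps)[:, None] has shape (tsteps, 1)
# and broadcasts against np.arange(numpts), shape (numpts,), so the
# result has shape (tsteps, numpts) and both loops disappear
vec_data = data[np.arange(tsteps)[:, None], 5, np.arange(numpts), 0]

print np.allclose(new_data, vec_data)   # True, up to float32 rounding
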
From mdekauwe at gmail.com Tue Jun 8 12:00:59 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Tue, 8 Jun 2010 09:00:59 -0700 (PDT)
Subject: [SciPy-User] Re: removing for loops...
In-Reply-To: <28819759.post@talk.nabble.com>
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com>
	<28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com>
	<28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com>
	<28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com>
	<28819759.post@talk.nabble.com>
Message-ID: <28819859.post@talk.nabble.com>

Similarly,

mths = np.arange(12)
pts = np.arange(numpts)
out_array[mths, pts] = array[mths, 0, r, c]

does not work either...

-- 
View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28819859.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From jlconlin at gmail.com Tue Jun 8 12:36:34 2010
From: jlconlin at gmail.com (Jeremy Conlin)
Date: Tue, 8 Jun 2010 10:36:34 -0600
Subject: [SciPy-User] curve_fit error: Optional parameters not found...
Message-ID: 

I downloaded scipy 0.8b1 yesterday; I was excited to try out the new
curve_fit function. Today I have been playing with it, and some of the
time it works. Other times I get the error:

RuntimeError: Optimal parameters not found: Both actual and predicted
relative reductions in the sum of squares are at most 0.000000 and the
relative error between two consecutive iterates is at most 0.000000

I know this has been discussed before (see
http://mail.scipy.org/pipermail/scipy-user/2009-August/022088.html),
but apparently it has not been fixed. Can someone explain why I get
this error and how I can avoid it?

Thanks,
Jeremy

From cel48 at st-andrews.ac.uk Tue Jun 8 12:47:05 2010
From: cel48 at st-andrews.ac.uk (Christine)
Date: Tue, 8 Jun 2010 16:47:05 +0000 (UTC)
Subject: [SciPy-User] problems with splrep,splev
References: <114880320904170839q26855a1doa3fce5423901f8c1@mail.gmail.com>
	<2F3F6D5A-1F16-40DD-94F5-0AA1EE7358BB@cs.toronto.edu>
Message-ID: 

Hi,

I'm stumbling on xb and xe as well.

> > Right, but then I don't get the meaning of xb and xe at all. What
> > sense does it make to choose a fit interval larger than the input
> > data? IMHO x[0] < xb < xe < x[-1] should hold, but obviously the
> > docs tell the opposite.
>
> What sense does it make to fit using a smaller interval than
> x[0] ... x[-1]? You'd then be throwing away some of your observations.
> An xb value < x[0] or an xe value > x[-1] might do something.
>
> I don't see why you're specifying xb and xe to begin with. If you're
> fitting it to all the data, simply omitting those arguments makes most
> sense. You can slice your input arrays if you'd prefer not to use all
> of your data.

What sense does it make to fit using a _bigger_ interval than
x[0]...x[-1]? There are no more data points, so there is nothing more
to fit to, however much you extend the interval, right?

Of course, slicing the input arrays is an acceptable workaround if I
want to fit only part of my data (which, indeed, I quite often want to
do). But if xb and xe are not intended to limit that interval, they
look perfectly useless to me.

Thanks for any future enlightenment!
Christine
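A short sketch bearing on Christine's question, with all values invented
for illustration: slicing the inputs is what restricts the fit to part of
the data, whereas xb and xe only widen the interval over which the spline
is defined, which, as far as I can tell, is mainly useful so that splev
can be evaluated slightly outside the data range.

import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 10.0, 50)
y = np.sin(x)

# fitting only part of the data: slice the inputs, as suggested above
mask = (x >= 2.0) & (x <= 8.0)
tck = splrep(x[mask], y[mask], s=0)
print splev(5.0, tck)        # close to sin(5.0)

# xb/xe must satisfy xb <= x[0] and xe >= x[-1]; they widen the
# approximation interval, so the end knots span [xb, xe] and splev has
# a defined polynomial piece just outside the data
tck_wide = splrep(x, y, xb=-1.0, xe=11.0, s=0)
print splev(10.5, tck_wide)  # an extrapolated value; use with caution
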
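On Jeremy's curve_fit error above: the quoted message corresponds, if I
read MINPACK's status codes correctly, to leastsq's ier=3, which is in
fact a convergence condition (both stopping tests satisfied at once); the
0.8b1 curve_fit appears to treat any status other than ier=1 as failure,
which would explain why it only fails some of the time. A possible
workaround, sketched here with an invented model that is not from
Jeremy's code, is to call leastsq directly and interpret ier yourself.

import numpy as np
from scipy.optimize import leastsq

def model(x, a, b):
    # purely illustrative model, not from Jeremy's script
    return a * np.exp(-b * x)

def residuals(p, x, y):
    return y - model(x, p[0], p[1])

x = np.linspace(0.0, 4.0, 50)
y = model(x, 2.5, 1.3) + 0.05 * np.random.randn(50)

p, cov, info, mesg, ier = leastsq(residuals, [2.0, 1.0],
                                  args=(x, y), full_output=True)
if ier in (1, 2, 3, 4):      # all four codes mean leastsq converged
    print p                  # roughly [2.5, 1.3]
else:
    print "leastsq failed:", mesg
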
From vincent at vincentdavis.net Tue Jun 8 13:11:56 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Tue, 8 Jun 2010 11:11:56 -0600
Subject: [SciPy-User] Mail list proposal/idea
Message-ID: 

I prefer the mailing list, but stackoverflow.com is good. What I really
like about stackoverflow, though, is not the answers but the ability to
search, and even the dynamic search as I start to type a question.
Here are a few ideas:

1) When the first email of a thread is processed by the mail server, a
search for similar questions is run and the relevant posts are attached
(as links) at the bottom of the message.

2) A web-based interface, similar to stackoverflow, in which a user can
search and post within the same page, and suggested relevant posts are
shown as they type a question.

I probably should be posting this elsewhere, like on some mailman list.

Do you think this is feasible (cost, benefit) or desirable?

Vincent

From vincent at vincentdavis.net Tue Jun 8 13:15:09 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Tue, 8 Jun 2010 11:15:09 -0600
Subject: [SciPy-User] Mail list proposal/idea
In-Reply-To: 
References: 
Message-ID: 

I meant to post this on the numpy list, since there was already a
thread going there about the mailing list, so I would prefer to keep
the discussion there, but I don't really care.

Vincent

On Tue, Jun 8, 2010 at 11:11 AM, Vincent Davis wrote:
> I prefer the mailing list, but stackoverflow.com is good. What I
> really like about stackoverflow, though, is not the answers but the
> ability to search ...

From fernando.ferreira at poli.ufrj.br Fri Jun 4 19:30:21 2010
From: fernando.ferreira at poli.ufrj.br (Fernando Guimarães Ferreira)
Date: Fri, 4 Jun 2010 20:30:21 -0300
Subject: [SciPy-User] scipy.io.matlab.loadmat error
In-Reply-To: 
References: <8CA9D85A-CA93-4B7F-8434-02F633C44090@gmail.com>
Message-ID: 

So, things have changed... I rebuilt numpy and scipy, and it turns out
that scipy.io.matlab.loadmat is working again. However,
scipy.test('1', '10') is still failing. I attached the output... I
can't understand why. I installed the dmg package from the SourceForge
repository.

Any idea?

Cheers,
Fernando

On Mon, May 31, 2010 at 11:58 PM, Matthew Brett wrote:
> Hi,
> ...
> > TypeError: Expecting miMATRIX type here, got 1296630016
> > In [5]:
> >
> > Same file.... But it does not work at all...
>
> What version of numpy do you have? I can't imagine it makes a
> difference, but still.
>
> Did you run the scipy tests? Did the scipy.io.matlab tests pass?
>
> Best,
> Matthew
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
Last login: Fri Jun 4 16:56:34 on ttys006
1 [fguimara] ~ > python
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55) 
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy >>> scipy.__version__ '0.7.2' >>> scipy.test('1', '10') Running unit tests for scipy NumPy version 1.4.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy SciPy version 0.7.2 SciPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy Python version 2.6.5 (r265:79359, Mar 24 2010, 01:32:55) [GCC 4.0.1 (Apple Inc. build 5493)] nose version 0.11.3 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext', 'array_from_pyobj'] nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/convolve.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/integrate/vode.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/dfitpack.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/numpyio.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/blas/cblas.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/blas/fblas.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/lapack/atlas_version.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/lapack/calc_lwork.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/lapack/clapack.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/lib/lapack/flapack.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/atlas_version.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/calc_lwork.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/cblas.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/clapack.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/fblas.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/flapack.so is executable; skipped /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/optimize/minpack2.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/optimize/moduleTNC.so is executable; skipped nose.selector: INFO: 
/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/sigtools.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/signal/spline.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/spatial/ckdtree.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/specfun.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/futil.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/mvn.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/statlib.so is executable; skipped nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/vonmises_cython.so is executable; skipped Tests cophenet(Z) on tdist data set. ... ok Tests cophenet(Z, Y) on tdist data set. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok Tests correspond(Z, y) with empty linkage and condensed distance matrix. ... ok Tests num_obs_linkage with observation matrices of multiple sizes. ... ok Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok Tests from_mlab_linkage on empty linkage array. ... ok Tests from_mlab_linkage on linkage array with multiple rows. ... ok Tests from_mlab_linkage on linkage array with single row. ... ok Tests inconsistency matrix calculation (depth=1) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a complete linkage. ... ok Tests inconsistency matrix calculation (depth=1, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=2, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=3, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=4, dataset=Q) with single linkage. ... ok Tests inconsistency matrix calculation (depth=1) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=2) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=3) on a single linkage. ... ok Tests inconsistency matrix calculation (depth=4) on a single linkage. ... ok Tests is_isomorphic on test case #1 (one flat cluster, different labellings) ... 
ok Tests is_isomorphic on test case #2 (two flat clusters, different labelings) ... ok Tests is_isomorphic on test case #3 (no flat clusters) ... ok Tests is_isomorphic on test case #4A (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #4B (3 flat clusters, different labelings, nonisomorphic) ... ok Tests is_isomorphic on test case #4C (3 flat clusters, different labelings, isomorphic) ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling). Run 3 times. ... ok Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling, slightly non-isomorphic.) Run 3 times. ... ok Tests is_monotonic(Z) on 1x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting False. ... ok Tests is_monotonic(Z) on 2x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 1). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 2). Expecting False. ... ok Tests is_monotonic(Z) on 3x4 linkage (case 3). Expecting False ... ok Tests is_monotonic(Z) on 3x4 linkage. Expecting True. ... ok Tests is_monotonic(Z) on an empty linkage. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on Iris data set. Expecting True. ... ok Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Perturbing. Expecting False. ... ok Tests is_valid_im(R) on im over 2 observations. ... ok Tests is_valid_im(R) on im over 3 observations. ... ok Tests is_valid_im(R) with 3 columns. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link counts. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height means. ... ok Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height standard deviations. ... ok Tests is_valid_im(R) with 5 columns. ... ok Tests is_valid_im(R) with empty inconsistency matrix. ... ok Tests is_valid_im(R) with integer type. ... ok Tests is_valid_linkage(Z) on linkage over 2 observations. ... ok Tests is_valid_linkage(Z) on linkage over 3 observations. ... ok Tests is_valid_linkage(Z) with 3 columns. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative counts. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative distances. ... ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (left). ... 
ok Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (right). ... ok Tests is_valid_linkage(Z) with 5 columns. ... ok Tests is_valid_linkage(Z) with empty linkage. ... ok Tests is_valid_linkage(Z) with integer type. ... ok Tests leaders using a flat clustering generated by single linkage. ... ok Tests leaves_list(Z) on a 1x4 linkage. ... ok Tests leaves_list(Z) on a 2x4 linkage. ... ok Tests leaves_list(Z) on the Iris data set using average linkage. ... ok Tests leaves_list(Z) on the Iris data set using centroid linkage. ... ok Tests leaves_list(Z) on the Iris data set using complete linkage. ... ok Tests leaves_list(Z) on the Iris data set using median linkage. ... ok Tests leaves_list(Z) on the Iris data set using single linkage. ... ok Tests leaves_list(Z) on the Iris data set using ward linkage. ... ok Tests linkage(Y, 'average') on the tdist data set. ... ok Tests linkage(Y, 'centroid') on the Q data set. ... ok Tests linkage(Y, 'complete') on the Q data set. ... ok Tests linkage(Y, 'complete') on the tdist data set. ... ok Tests linkage(Y) where Y is a 0x4 linkage matrix. Exception expected. ... ok Tests linkage(Y, 'single') on the Q data set. ... ok Tests linkage(Y, 'single') on the tdist data set. ... ok Tests linkage(Y, 'weighted') on the Q data set. ... ok Tests linkage(Y, 'weighted') on the tdist data set. ... ok Tests maxdists(Z) on the Q data set using centroid linkage. ... ok Tests maxdists(Z) on the Q data set using complete linkage. ... ok Tests maxdists(Z) on the Q data set using median linkage. ... ok Tests maxdists(Z) on the Q data set using single linkage. ... ok Tests maxdists(Z) on the Q data set using Ward linkage. ... ok Tests maxdists(Z) on empty linkage. Expecting exception. ... ok Tests maxdists(Z) on linkage with one cluster. ... ok Tests maxinconsts(Z, R) on the Q data set using centroid linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using complete linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using median linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using single linkage. ... ok Tests maxinconsts(Z, R) on the Q data set using Ward linkage. ... ok Tests maxinconsts(Z, R) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxinconsts(Z, R) on empty linkage. Expecting exception. ... ok Tests maxinconsts(Z, R) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 0) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 0) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 0) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 0) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 1) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 1) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. 
Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... ok Testing that kmeans2 init methods work. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 with rank 1 data. ... ok Testing simple call to kmeans2 and its results. ... ok Regression test for #546: fail when k arg is 0. ... ok This will cause kmean to have a cluster with no points. ... ok test_kmeans_simple (test_vq.TestKMean) ... ok test_py_vq (test_vq.TestVq) ... ok test_py_vq2 (test_vq.TestVq) ... ok test_vq (test_vq.TestVq) ... ok Test special rank 1 vq algo, python implementation. ... ok nose.selector: INFO: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/cluster/tests/vq_test.py is executable; skipped test_definition (test_basic.TestFft) ... ok test_djbfft (test_basic.TestFft) ... ok test_n_argument_real (test_basic.TestFft) ... ok test_axes_argument (test_basic.TestFftn) ... ok test_definition (test_basic.TestFftn) ... ok test_shape_argument (test_basic.TestFftn) ... ok test_shape_argument_more (test_basic.TestFftn) ... ok test_shape_axes_argument (test_basic.TestFftn) ... ok test_shape_axes_argument2 (test_basic.TestFftn) ... ok test_definition (test_basic.TestIfft) ... ok test_djbfft (test_basic.TestIfft) ... ok test_random_complex (test_basic.TestIfft) ... ok test_random_real (test_basic.TestIfft) ... ok test_definition (test_basic.TestIfftn) ... ok test_random_complex (test_basic.TestIfftn) ... ok test_definition (test_basic.TestIrfft) ... 
ok test_djbfft (test_basic.TestIrfft) ... ok test_random_real (test_basic.TestIrfft) ... ok test_definition (test_basic.TestRfft) ... ok test_djbfft (test_basic.TestRfft) ... ok fft returns wrong result with axes parameter. ... ok test_definition (test_helper.TestFFTFreq) ... ok test_definition (test_helper.TestFFTShift) ... ok test_inverse (test_helper.TestFFTShift) ... ok test_definition (test_helper.TestRFFTFreq) ... ok test_definition (test_pseudo_diffs.TestDiff) ... ok test_expr (test_pseudo_diffs.TestDiff) ... ok test_expr_large (test_pseudo_diffs.TestDiff) ... ok test_int (test_pseudo_diffs.TestDiff) ... ok test_period (test_pseudo_diffs.TestDiff) ... ok test_random_even (test_pseudo_diffs.TestDiff) ... ok test_random_odd (test_pseudo_diffs.TestDiff) ... ok test_sin (test_pseudo_diffs.TestDiff) ... ok test_zero_nyquist (test_pseudo_diffs.TestDiff) ... ok test_definition (test_pseudo_diffs.TestHilbert) ... ok test_random_even (test_pseudo_diffs.TestHilbert) ... ok test_random_odd (test_pseudo_diffs.TestHilbert) ... ok test_tilbert_relation (test_pseudo_diffs.TestHilbert) ... ok test_definition (test_pseudo_diffs.TestIHilbert) ... ok test_itilbert_relation (test_pseudo_diffs.TestIHilbert) ... ok test_definition (test_pseudo_diffs.TestITilbert) ... ok test_definition (test_pseudo_diffs.TestShift) ... ok test_definition (test_pseudo_diffs.TestTilbert) ... ok test_random_even (test_pseudo_diffs.TestTilbert) ... ok test_random_odd (test_pseudo_diffs.TestTilbert) ... ok Check the vode solver ... ok Check the zvode solver ... ok test_odeint (test_integrate.TestOdeint) ... ok test_algebraic_log_weight (test_quadpack.TestQuad) ... ok test_cauchypv_weight (test_quadpack.TestQuad) ... ok test_cosine_weighted_infinite (test_quadpack.TestQuad) ... ok test_double_integral (test_quadpack.TestQuad) ... ok test_indefinite (test_quadpack.TestQuad) ... ok test_sine_weighted_finite (test_quadpack.TestQuad) ... ok test_sine_weighted_infinite (test_quadpack.TestQuad) ... ok test_singular (test_quadpack.TestQuad) ... ok test_triple_integral (test_quadpack.TestQuad) ... ok test_typical (test_quadpack.TestQuad) ... ok test_non_dtype (test_quadrature.TestQuadrature) ... ok test_quadrature (test_quadrature.TestQuadrature) ... ok test_romb (test_quadrature.TestQuadrature) ... ok test_romberg (test_quadrature.TestQuadrature) ... ok test_bilinearity (test_fitpack.TestLSQBivariateSpline) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/fitpack2.py:498: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ok test_integral (test_fitpack.TestLSQBivariateSpline) ... ok test_linear_constant (test_fitpack.TestLSQBivariateSpline) ... ok test_defaults (test_fitpack.TestRectBivariateSpline) ... ok test_evaluate (test_fitpack.TestRectBivariateSpline) ... ok test_integral (test_fitpack.TestSmoothBivariateSpline) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/fitpack2.py:439: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ok test_linear_1d (test_fitpack.TestSmoothBivariateSpline) ... 
ok test_linear_constant (test_fitpack.TestSmoothBivariateSpline) ... ok test_linear_1d (test_fitpack.TestUnivariateSpline) ... ok test_linear_constant (test_fitpack.TestUnivariateSpline) ... ok test_subclassing (test_fitpack.TestUnivariateSpline) ... ok test_interpolate.TestInterp1D.test_bounds('linear',) ... ok test_interpolate.TestInterp1D.test_bounds('linear',) ... ok test_interpolate.TestInterp1D.test_bounds('cubic',) ... ok test_interpolate.TestInterp1D.test_bounds('cubic',) ... ok test_interpolate.TestInterp1D.test_bounds('nearest',) ... ok test_interpolate.TestInterp1D.test_bounds('nearest',) ... ok test_interpolate.TestInterp1D.test_bounds('slinear',) ... ok test_interpolate.TestInterp1D.test_bounds('slinear',) ... ok test_interpolate.TestInterp1D.test_bounds('zero',) ... ok test_interpolate.TestInterp1D.test_bounds('zero',) ... ok test_interpolate.TestInterp1D.test_bounds('quadratic',) ... ok test_interpolate.TestInterp1D.test_bounds('quadratic',) ... ok test_interpolate.TestInterp1D.test_complex(, 'linear') ... ok test_interpolate.TestInterp1D.test_complex(, 'linear') ... ok test_interpolate.TestInterp1D.test_complex(, 'nearest') ... ok test_interpolate.TestInterp1D.test_complex(, 'nearest') ... ok test_interpolate.TestInterp1D.test_complex(, 'cubic') ... ok test_interpolate.TestInterp1D.test_complex(, 'cubic') ... ok test_interpolate.TestInterp1D.test_complex(, 'slinear') ... ok test_interpolate.TestInterp1D.test_complex(, 'slinear') ... ok test_interpolate.TestInterp1D.test_complex(, 'quadratic') ... ok test_interpolate.TestInterp1D.test_complex(, 'quadratic') ... ok test_interpolate.TestInterp1D.test_complex(, 'zero') ... ok test_interpolate.TestInterp1D.test_complex(, 'zero') ... ok Check the actual implementation of spline interpolation. ... ok Check that the attributes are initialized appropriately by the ... ok Check the actual implementation of linear interpolation. ... ok test_interpolate.TestInterp1D.test_nd('linear',) ... ok test_interpolate.TestInterp1D.test_nd('linear',) ... ok test_interpolate.TestInterp1D.test_nd('cubic',) ... ok test_interpolate.TestInterp1D.test_nd('cubic',) ... ok test_interpolate.TestInterp1D.test_nd('slinear',) ... ok test_interpolate.TestInterp1D.test_nd('slinear',) ... ok test_interpolate.TestInterp1D.test_nd('quadratic',) ... ok test_interpolate.TestInterp1D.test_nd('quadratic',) ... ok test_interpolate.TestInterp1D.test_nd('nearest',) ... ok test_interpolate.TestInterp1D.test_nd('nearest',) ... ok test_interpolate.TestInterp1D.test_nd_zero_spline ... KNOWNFAIL: zero-order splines fail for the last point Check the actual implementation of nearest-neighbour interpolation. ... ok Make sure that appropriate exceptions are raised when invalid values ... ok Check the actual implementation of zero-order spline interpolation. ... KNOWNFAIL: zero-order splines fail for the last point test_interp2d (test_interpolate.TestInterp2D) ... ok test_interp2d_meshgrid_input (test_interpolate.TestInterp2D) ... ok test_lagrange (test_interpolate.TestLagrange) ... ok test_block_average_above (test_interpolate_wrapper.Test) ... ok test_linear (test_interpolate_wrapper.Test) ... ok test_linear2 (test_interpolate_wrapper.Test) ... ok test_logarithmic (test_interpolate_wrapper.Test) ... ok test_nearest (test_interpolate_wrapper.Test) ... ok test_append (test_polyint.CheckBarycentric) ... ok test_delayed (test_polyint.CheckBarycentric) ... ok test_lagrange (test_polyint.CheckBarycentric) ... ok test_scalar (test_polyint.CheckBarycentric) ... 
ok test_shapes_1d_vectorvalue (test_polyint.CheckBarycentric) ... ok test_shapes_scalarvalue (test_polyint.CheckBarycentric) ... ok test_shapes_vectorvalue (test_polyint.CheckBarycentric) ... ok test_vector (test_polyint.CheckBarycentric) ... ok test_wrapper (test_polyint.CheckBarycentric) ... ok test_derivative (test_polyint.CheckKrogh) ... ok test_derivatives (test_polyint.CheckKrogh) ... ok test_empty (test_polyint.CheckKrogh) ... ok test_hermite (test_polyint.CheckKrogh) ... ok test_high_derivative (test_polyint.CheckKrogh) ... ok test_lagrange (test_polyint.CheckKrogh) ... ok test_low_derivatives (test_polyint.CheckKrogh) ... ok test_scalar (test_polyint.CheckKrogh) ... ok test_shapes_1d_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue_derivative (test_polyint.CheckKrogh) ... ok test_vector (test_polyint.CheckKrogh) ... ok test_wrapper (test_polyint.CheckKrogh) ... ok test_construction (test_polyint.CheckPiecewise) ... ok test_derivative (test_polyint.CheckPiecewise) ... ok test_derivatives (test_polyint.CheckPiecewise) ... ok test_incremental (test_polyint.CheckPiecewise) ... ok test_scalar (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue_1d (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue_derivative (test_polyint.CheckPiecewise) ... ok test_vector (test_polyint.CheckPiecewise) ... ok test_wrapper (test_polyint.CheckPiecewise) ... ok test_exponential (test_polyint.CheckTaylor) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_regularity('multiquadric', 0.050000000000000003) ... ok test_rbf.test_rbf_regularity('inverse multiquadric', 0.02) ... ok test_rbf.test_rbf_regularity('gaussian', 0.01) ... ok test_rbf.test_rbf_regularity('cubic', 0.14999999999999999) ... ok test_rbf.test_rbf_regularity('quintic', 0.10000000000000001) ... ok test_rbf.test_rbf_regularity('thin-plate', 0.10000000000000001) ... ok test_rbf.test_rbf_regularity('linear', 0.20000000000000001) ... ok test_byteordercodes.test_native ... ok test_byteordercodes.test_to_numpy ... 
ok test_mio.test_load('double', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.4_GLNX86.mat'], {'testdouble': array([[ 0. , 0.78539816, 1.57079633, 2.35619449, 3.14159265, ... ok test_mio.test_load('string', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststring_7.4_GLNX86.mat'], {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_load('complex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcomplex_7.4_GLNX86.mat'], {'testcomplex': array([[ 1.00000000e+00 +0.00000000e+00j, ... ok test_mio.test_load('matrix', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmatrix_7.4_GLNX86.mat'], {'testmatrix': array([[ 1., 2., 3., 4., 5.], ... 
ok test_mio.test_load('sparse', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_load('sparsecomplex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_load('multi', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testmulti_7.4_GLNX86.mat'], {'a': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_load('minus', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testminus_7.4_GLNX86.mat'], {'testminus': array([[-1]])}) ... ok test_mio.test_load('onechar', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testonechar_7.4_GLNX86.mat'], {'testonechar': array([u'r'], ... 
ok test_mio.test_load('cell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcell_7.4_GLNX86.mat'], {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... ok test_mio.test_load('scalarcell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testscalarcell_7.4_GLNX86.mat'], {'testscalarcell': array([[[[1]]]], dtype=object)}) ... ok test_mio.test_load('emptycell', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_5.3_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testemptycell_7.4_GLNX86.mat'], {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}) ... ok test_mio.test_load('stringarray', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststringarray_7.4_GLNX86.mat'], {'teststringarray': array([u'one ', u'two ', u'three'], ... ok test_mio.test_load('3dmatrix', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.4_GLNX86.mat'], {'test3dmatrix': array([[[ 1, 7, 13, 19], ... 
ok test_mio.test_load('struct', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststruct_7.4_GLNX86.mat'], {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.7182818284590451, 3.1415926535897931]], [[(1.4142135623730951+1.4142135623730951j), (2.7182818284590451+2.7182818284590451j), (3.1415926535897931+3.1415926535897931j)]])]], ... ok test_mio.test_load('cellnest', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testcellnest_7.4_GLNX86.mat'], {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}) ... ok test_mio.test_load('structnest', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructnest_7.4_GLNX86.mat'], {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... ok test_mio.test_load('structarr', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/teststructarr_7.4_GLNX86.mat'], {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok test_mio.test_load('object', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testobject_7.4_GLNX86.mat'], {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... 
ok test_mio.test_load('unicode', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testunicode_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testunicode_7.4_GLNX86.mat'], {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... ok test_mio.test_load('sparse', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_load('sparsecomplex', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_load('func', ['/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testfunc_7.4_GLNX86.mat'], {'testfunc': 'Read error: Cannot read matlab functions'}) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio.py:111: Warning: Unreadable variable "testfunc", because "Cannot read matlab functions" matfile_dict = MR.get_variables() ok test_mio.test_round_trip('double_round_trip', {'testdouble': array([[ 0. , 0.78539816, 1.57079633, 2.35619449, 3.14159265, ... ok test_mio.test_round_trip('string_round_trip', {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_round_trip('complex_round_trip', {'testcomplex': array([[ 1.00000000e+00 +0.00000000e+00j, ... ok test_mio.test_round_trip('matrix_round_trip', {'testmatrix': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... 
ok test_mio.test_round_trip('multi_round_trip', {'a': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_round_trip('minus_round_trip', {'testminus': array([[-1]])}, '4') ... ok test_mio.test_round_trip('onechar_round_trip', {'testonechar': array([u'r'], ... ok test_mio.test_round_trip('cell_round_trip', {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio.py:165: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions oned_as=oned_as) ok test_mio.test_round_trip('scalarcell_round_trip', {'testscalarcell': array([[[[1]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('emptycell_round_trip', {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('stringarray_round_trip', {'teststringarray': array([u'one ', u'two ', u'three'], ... ok test_mio.test_round_trip('3dmatrix_round_trip', {'test3dmatrix': array([[[ 1, 7, 13, 19], ... ok test_mio.test_round_trip('struct_round_trip', {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.7182818284590451, 3.1415926535897931]], [[(1.4142135623730951+1.4142135623730951j), (2.7182818284590451+2.7182818284590451j), (3.1415926535897931+3.1415926535897931j)]])]], ... ok test_mio.test_round_trip('cellnest_round_trip', {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}, '5') ... ok test_mio.test_round_trip('structnest_round_trip', {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... ok test_mio.test_round_trip('structarr_round_trip', {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok test_mio.test_round_trip('object_round_trip', {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... ok test_mio.test_round_trip('unicode_round_trip', {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... ok test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('objectarray_round_trip', {'testobjectarray': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]]), ... ok test_mio.test_gzip_simple ... ok test_mio.test_mat73 ... ok test_mio.test_warnings(, , '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat') ... ok test_mio.test_warnings(, , '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat') ... ok test_mio.test_warnings((, , '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat'), {'basename': 'raw', 'struct_as_record': True}) ... 
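
(The FutureWarning above about oned_as, and the struct_as_record one a little further down, just mean the scipy.io.matlab defaults are changing; passing the keywords explicitly silences both. A minimal sketch:

>>> import numpy as np
>>> from scipy.io import savemat, loadmat
>>> savemat('vec.mat', {'v': np.arange(5.)}, oned_as='row')  # explicit, no warning
>>> d = loadmat('vec.mat', struct_as_record=True)            # likewise
)
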
ok Regression test for #653. ... ok test_mio.test_structname_len ... ok test_mio.test_4_and_long_field_names_incompatible ... ok test_mio.test_long_field_names ... ok test_mio.test_long_field_names_in_struct ... ok test_mio.test_cell_with_one_thing_in_it ... ok /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/tests/test_mio.py:438: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions mfw = MatFile5Writer(StringIO()) test_mio.test_writer_properties([], []) ... ok test_mio.test_writer_properties(['avar'], ['avar']) ... ok test_mio.test_writer_properties(False, False) ... ok test_mio.test_writer_properties(True, True) ... ok test_mio.test_writer_properties(False, False) ... ok test_mio.test_writer_properties(True, True) ... ok test_mio.test_use_small_element(True,) ... ok test_mio.test_use_small_element(True,) ... ok test_mio.test_save_dict ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/io/matlab/mio.py:84: FutureWarning: Using struct_as_record default value (False) This will change to True in future versions return MatFile5Reader(byte_stream, **kwargs) ok test_mio.test_1d_shape((5, 1), (5, 1)) ... ok test_mio.test_1d_shape((1, 5), (1, 5)) ... ok test_mio.test_1d_shape((5, 1), (5, 1)) ... ok test_mio.test_1d_shape((1, 5), (1, 5)) ... ok test_mio.test_1d_shape((5, 1), (5, 1)) ... ok test_mio.test_1d_shape((1, 5), (1, 5)) ... ok test_mio.test_compression(array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok test_mio.test_compression(array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok test_mio.test_compression(True,) ... ok test_mio.test_compression(array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok test_mio.test_compression(array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok test_mio.test_single_object ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_skip_variable(True,) ... ok test_mio.test_empty_struct((1, 1), (1, 1)) ... ok test_mio.test_empty_struct(dtype('object'), dtype('object')) ... ok test_mio.test_empty_struct(True,) ... ok test_mio.test_empty_struct(array([], ... ok test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok test_mio.test_recarray(array([u'python'], ... ok test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok test_mio.test_recarray(array([u'python'], ... ok test_mio.test_recarray(dtype([('f1', '|O4'), ('f2', '|O4')]), dtype([('f1', '|O4'), ('f2', '|O4')])) ... ok test_mio.test_recarray(array([[ 99.]]), 99) ... ok test_mio.test_recarray(array([u'not perl'], ... ok test_mio.test_save_object(array([[1]]), 1) ... ok test_mio.test_save_object(array([u'a string'], ... ok test_mio.test_save_object(array([[1]]), 1) ... ok test_mio.test_save_object(array([u'a string'], ... ok test_basic (test_array_import.TestNumpyio) ... Warning: 1000000 bytes requested, 20 bytes read. ok test_complex (test_array_import.TestReadArray) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `write_array` is deprecated! This function is replaced by numpy.savetxt which allows the same functionality through a different syntax. warnings.warn(depdoc, DeprecationWarning) /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `read_array` is deprecated! 
The functionality of read_array is in numpy.loadtxt which allows the same functionality using different syntax. warnings.warn(depdoc, DeprecationWarning) ok test_float (test_array_import.TestReadArray) ... ok test_integer (test_array_import.TestReadArray) ... ok test_get_open_file_works_with_filelike_objects (test_array_import.TestRegression) ... ok test_random_rect_real (test_mmio.TestMMIOArray) ... ok test_random_symmetric_real (test_mmio.TestMMIOArray) ... ok test_simple (test_mmio.TestMMIOArray) ... ok test_simple_complex (test_mmio.TestMMIOArray) ... ok test_simple_hermitian (test_mmio.TestMMIOArray) ... ok test_simple_real (test_mmio.TestMMIOArray) ... ok test_simple_rectangular (test_mmio.TestMMIOArray) ... ok test_simple_rectangular_real (test_mmio.TestMMIOArray) ... ok test_simple_skew_symmetric (test_mmio.TestMMIOArray) ... ok test_simple_skew_symmetric_float (test_mmio.TestMMIOArray) ... ok test_simple_symmetric (test_mmio.TestMMIOArray) ... ok test_complex_write_read (test_mmio.TestMMIOCoordinate) ... ok test_empty_write_read (test_mmio.TestMMIOCoordinate) ... ok read a general matrix ... ok read a hermitian matrix ... ok read a skew-symmetric matrix ... ok read a symmetric pattern matrix ... ok test_real_write_read (test_mmio.TestMMIOCoordinate) ... ok test_sparse_formats (test_mmio.TestMMIOCoordinate) ... ok test_init (test_npfile.TestNpFile) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/utils.py:140: DeprecationWarning: `npfile` is deprecated! You can achieve the same effect as using npfile, using ndarray.tofile and numpy.fromfile. Even better you can use memory-mapped arrays and data-types to map out a file format for direct manipulation in NumPy. warnings.warn(depdoc, DeprecationWarning) ok test_parse_endian (test_npfile.TestNpFile) ... ok test_read_write_array (test_npfile.TestNpFile) ... ok test_read_write_raw (test_npfile.TestNpFile) ... ok test_remaining_bytes (test_npfile.TestNpFile) ... ok test_cast_to_fp (test_recaster.TestRecaster) ... ok test_init (test_recaster.TestRecaster) ... ok test_recasts (test_recaster.TestRecaster) ... ok test_smallest_int_sctype (test_recaster.TestRecaster) ... ok test_blas (test_blas.TestBLAS) ... ok test_cblas (test_blas.TestBLAS) ... ok test_fblas (test_blas.TestBLAS) ... ok test_axpy (test_blas.TestCBLAS1Simple) ... ok test_amax (test_blas.TestFBLAS1Simple) ... ok test_asum (test_blas.TestFBLAS1Simple) ... ok test_axpy (test_blas.TestFBLAS1Simple) ... ok test_copy (test_blas.TestFBLAS1Simple) ... ok test_dot (test_blas.TestFBLAS1Simple) ... ok test_nrm2 (test_blas.TestFBLAS1Simple) ... ok test_scal (test_blas.TestFBLAS1Simple) ... ok test_swap (test_blas.TestFBLAS1Simple) ... ok test_gemv (test_blas.TestFBLAS2Simple) ... ok test_ger (test_blas.TestFBLAS2Simple) ... ok test_gemm (test_blas.TestFBLAS3Simple) ... ok test_gemm2 (test_blas.TestFBLAS3Simple) ... ok test_default_a (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCaxpy) ... ok test_x_and_y_stride (test_fblas.TestCaxpy) ... ok test_x_bad_size (test_fblas.TestCaxpy) ... ok test_x_stride (test_fblas.TestCaxpy) ... ok test_y_bad_size (test_fblas.TestCaxpy) ... ok test_y_stride (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCcopy) ... ok test_x_and_y_stride (test_fblas.TestCcopy) ... ok test_x_bad_size (test_fblas.TestCcopy) ... ok test_x_stride (test_fblas.TestCcopy) ... ok test_y_bad_size (test_fblas.TestCcopy) ... ok test_y_stride (test_fblas.TestCcopy) ... ok test_default_beta_y (test_fblas.TestCgemv) ... 
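
(The DeprecationWarnings above are expected: scipy.io's write_array/read_array and npfile are being phased out, and the messages name the plain-numpy replacements. Roughly:

>>> import numpy as np
>>> a = np.arange(10.).reshape(2, 5)
>>> np.savetxt('a.txt', a)                  # replaces scipy.io.write_array
>>> b = np.loadtxt('a.txt')                 # replaces scipy.io.read_array
>>> a.tofile('a.bin')                       # replaces npfile for raw binary i/o
>>> c = np.fromfile('a.bin').reshape(2, 5)  # default dtype is float64
)
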
ok test_simple (test_fblas.TestCgemv) ... ok test_simple_transpose (test_fblas.TestCgemv) ... ok test_simple_transpose_conj (test_fblas.TestCgemv) ... ok test_x_stride (test_fblas.TestCgemv) ... ok test_x_stride_assert (test_fblas.TestCgemv) ... ok test_x_stride_transpose (test_fblas.TestCgemv) ... ok test_y_stride (test_fblas.TestCgemv) ... ok test_y_stride_assert (test_fblas.TestCgemv) ... ok test_y_stride_transpose (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCscal) ... ok test_x_bad_size (test_fblas.TestCscal) ... ok test_x_stride (test_fblas.TestCscal) ... ok test_simple (test_fblas.TestCswap) ... ok test_x_and_y_stride (test_fblas.TestCswap) ... ok test_x_bad_size (test_fblas.TestCswap) ... ok test_x_stride (test_fblas.TestCswap) ... ok test_y_bad_size (test_fblas.TestCswap) ... ok test_y_stride (test_fblas.TestCswap) ... ok test_default_a (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDaxpy) ... ok test_x_and_y_stride (test_fblas.TestDaxpy) ... ok test_x_bad_size (test_fblas.TestDaxpy) ... ok test_x_stride (test_fblas.TestDaxpy) ... ok test_y_bad_size (test_fblas.TestDaxpy) ... ok test_y_stride (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDcopy) ... ok test_x_and_y_stride (test_fblas.TestDcopy) ... ok test_x_bad_size (test_fblas.TestDcopy) ... ok test_x_stride (test_fblas.TestDcopy) ... ok test_y_bad_size (test_fblas.TestDcopy) ... ok test_y_stride (test_fblas.TestDcopy) ... ok test_default_beta_y (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDgemv) ... ok test_simple_transpose (test_fblas.TestDgemv) ... ok test_simple_transpose_conj (test_fblas.TestDgemv) ... ok test_x_stride (test_fblas.TestDgemv) ... ok test_x_stride_assert (test_fblas.TestDgemv) ... ok test_x_stride_transpose (test_fblas.TestDgemv) ... ok test_y_stride (test_fblas.TestDgemv) ... ok test_y_stride_assert (test_fblas.TestDgemv) ... ok test_y_stride_transpose (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDscal) ... ok test_x_bad_size (test_fblas.TestDscal) ... ok test_x_stride (test_fblas.TestDscal) ... ok test_simple (test_fblas.TestDswap) ... ok test_x_and_y_stride (test_fblas.TestDswap) ... ok test_x_bad_size (test_fblas.TestDswap) ... ok test_x_stride (test_fblas.TestDswap) ... ok test_y_bad_size (test_fblas.TestDswap) ... ok test_y_stride (test_fblas.TestDswap) ... ok test_default_a (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestSaxpy) ... ok test_x_and_y_stride (test_fblas.TestSaxpy) ... ok test_x_bad_size (test_fblas.TestSaxpy) ... ok test_x_stride (test_fblas.TestSaxpy) ... ok test_y_bad_size (test_fblas.TestSaxpy) ... ok test_y_stride (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestScopy) ... ok test_x_and_y_stride (test_fblas.TestScopy) ... ok test_x_bad_size (test_fblas.TestScopy) ... ok test_x_stride (test_fblas.TestScopy) ... ok test_y_bad_size (test_fblas.TestScopy) ... ok test_y_stride (test_fblas.TestScopy) ... ok test_default_beta_y (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSgemv) ... ok test_simple_transpose (test_fblas.TestSgemv) ... ok test_simple_transpose_conj (test_fblas.TestSgemv) ... ok test_x_stride (test_fblas.TestSgemv) ... ok test_x_stride_assert (test_fblas.TestSgemv) ... ok test_x_stride_transpose (test_fblas.TestSgemv) ... ok test_y_stride (test_fblas.TestSgemv) ... ok test_y_stride_assert (test_fblas.TestSgemv) ... ok test_y_stride_transpose (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSscal) ... ok test_x_bad_size (test_fblas.TestSscal) ... 
ok test_x_stride (test_fblas.TestSscal) ... ok test_simple (test_fblas.TestSswap) ... ok test_x_and_y_stride (test_fblas.TestSswap) ... ok test_x_bad_size (test_fblas.TestSswap) ... ok test_x_stride (test_fblas.TestSswap) ... ok test_y_bad_size (test_fblas.TestSswap) ... ok test_y_stride (test_fblas.TestSswap) ... ok test_default_a (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZaxpy) ... ok test_x_and_y_stride (test_fblas.TestZaxpy) ... ok test_x_bad_size (test_fblas.TestZaxpy) ... ok test_x_stride (test_fblas.TestZaxpy) ... ok test_y_bad_size (test_fblas.TestZaxpy) ... ok test_y_stride (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZcopy) ... ok test_x_and_y_stride (test_fblas.TestZcopy) ... ok test_x_bad_size (test_fblas.TestZcopy) ... ok test_x_stride (test_fblas.TestZcopy) ... ok test_y_bad_size (test_fblas.TestZcopy) ... ok test_y_stride (test_fblas.TestZcopy) ... ok test_default_beta_y (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZgemv) ... ok test_simple_transpose (test_fblas.TestZgemv) ... ok test_simple_transpose_conj (test_fblas.TestZgemv) ... ok test_x_stride (test_fblas.TestZgemv) ... ok test_x_stride_assert (test_fblas.TestZgemv) ... ok test_x_stride_transpose (test_fblas.TestZgemv) ... ok test_y_stride (test_fblas.TestZgemv) ... ok test_y_stride_assert (test_fblas.TestZgemv) ... ok test_y_stride_transpose (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZscal) ... ok test_x_bad_size (test_fblas.TestZscal) ... ok test_x_stride (test_fblas.TestZscal) ... ok test_simple (test_fblas.TestZswap) ... ok test_x_and_y_stride (test_fblas.TestZswap) ... ok test_x_bad_size (test_fblas.TestZswap) ... ok test_x_stride (test_fblas.TestZswap) ... ok test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyev Clapack empty, skip clapack test test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr Clapack empty, skip clapack test test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr_ranges Clapack empty, skip clapack test test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyev Clapack empty, skip clapack test test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr Clapack empty, skip clapack test test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr_ranges Clapack empty, skip clapack test test_dsyev (test_esv.TestEsv) ... ok test_dsyevr (test_esv.TestEsv) ... ok test_dsyevr_ranges (test_esv.TestEsv) ... ok test_ssyev (test_esv.TestEsv) ... ok test_ssyevr (test_esv.TestEsv) ... ok test_ssyevr_ranges (test_esv.TestEsv) ... ok test_clapack_dsygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_1 Clapack empty, skip flapack test test_clapack_dsygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_2 Clapack empty, skip flapack test test_clapack_dsygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_3 Clapack empty, skip flapack test test_clapack_ssygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_1 Clapack empty, skip flapack test test_clapack_ssygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_2 Clapack empty, skip flapack test test_clapack_ssygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_3 Clapack empty, skip flapack test test_dsygv_1 (test_gesv.TestSygv) ... 
ok test_dsygv_2 (test_gesv.TestSygv) ... ok test_dsygv_3 (test_gesv.TestSygv) ... ok test_ssygv_1 (test_gesv.TestSygv) ... ok test_ssygv_2 (test_gesv.TestSygv) ... ok test_ssygv_3 (test_gesv.TestSygv) ... ok
test_clapack_dgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgebal Clapack empty, skip flapack test
test_clapack_dgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgehrd Clapack empty, skip flapack test
test_clapack_sgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgebal Clapack empty, skip flapack test
test_clapack_sgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgehrd Clapack empty, skip flapack test
test_dgebal (test_lapack.TestLapack) ... ok test_dgehrd (test_lapack.TestLapack) ... ok test_sgebal (test_lapack.TestLapack) ... ok test_sgehrd (test_lapack.TestLapack) ... ok
NO ATLAS INFO AVAILABLE
test_random (test_basic.TestDet) ... ok test_random_complex (test_basic.TestDet) ... ok test_simple (test_basic.TestDet) ... ok test_simple_complex (test_basic.TestDet) ... ok test_basic (test_basic.TestHankel) ... ok test_random (test_basic.TestInv) ... ok test_random_complex (test_basic.TestInv) ... ok test_simple (test_basic.TestInv) ... ok test_simple_complex (test_basic.TestInv) ... ok test_random_complex_exact (test_basic.TestLstsq) ... ok test_random_complex_overdet (test_basic.TestLstsq) ... ok test_random_exact (test_basic.TestLstsq) ... ok test_random_overdet (test_basic.TestLstsq) ... ok test_random_overdet_large (test_basic.TestLstsq) ... ok test_simple_exact (test_basic.TestLstsq) ... ok test_simple_overdet (test_basic.TestLstsq) ... ok test_simple_underdet (test_basic.TestLstsq) ... ok test_simple (test_basic.TestPinv) ... ok test_simple_0det (test_basic.TestPinv) ... ok test_simple_cols (test_basic.TestPinv) ... ok test_simple_rows (test_basic.TestPinv) ... ok test_20Feb04_bug (test_basic.TestSolve) ... ok test_nils_20Feb04 (test_basic.TestSolve) ... ok test_random (test_basic.TestSolve) ... ok test_random_complex (test_basic.TestSolve) ... ok test_random_sym (test_basic.TestSolve) ... ok test_random_sym_complex (test_basic.TestSolve) ... ok test_simple (test_basic.TestSolve) ... ok test_simple_complex (test_basic.TestSolve) ... ok test_simple_sym (test_basic.TestSolve) ... ok test_simple_sym_complex (test_basic.TestSolve) ... ok test_simple (test_basic.TestSolveBanded) ... ok test_basic (test_basic.TestToeplitz) ... ok test_2d (test_basic.TestTri) ... ok test_basic (test_basic.TestTri) ... ok test_diag (test_basic.TestTri) ... ok test_diag2d (test_basic.TestTri) ... ok test_basic (test_basic.TestTril) ... ok test_diag (test_basic.TestTril) ... ok test_basic (test_basic.TestTriu) ... ok test_diag (test_basic.TestTriu) ... ok test_cblas (test_blas.TestBLAS) ...
****************************************************************
WARNING: cblas module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas.
****************************************************************
ok test_fblas (test_blas.TestBLAS) ... ok test_axpy (test_blas.TestCBLAS1Simple) ... ok test_amax (test_blas.TestFBLAS1Simple) ... ok test_asum (test_blas.TestFBLAS1Simple) ... ok test_axpy (test_blas.TestFBLAS1Simple) ... ok test_complex_dotc (test_blas.TestFBLAS1Simple) ... ok test_complex_dotu (test_blas.TestFBLAS1Simple) ... ok test_copy (test_blas.TestFBLAS1Simple) ...
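
(The "NO ATLAS INFO AVAILABLE" line and the empty-cblas banner above, plus the matching empty-clapack banner further down, mean this scipy was built without ATLAS, so it falls back to the Fortran fblas/flapack wrappers; all of the clapack SKIPs follow from that and are harmless. One way to check what the build actually picked up:

>>> import numpy, scipy
>>> numpy.show_config()  # BLAS/LAPACK numpy was built against
>>> scipy.show_config()  # same for scipy
)
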
ok test_dot (test_blas.TestFBLAS1Simple) ... ok test_nrm2 (test_blas.TestFBLAS1Simple) ... ok test_scal (test_blas.TestFBLAS1Simple) ... ok test_swap (test_blas.TestFBLAS1Simple) ... ok test_gemv (test_blas.TestFBLAS2Simple) ... ok test_ger (test_blas.TestFBLAS2Simple) ... ok test_gemm (test_blas.TestFBLAS3Simple) ... ok test_lapack (test_build.TestF77Mismatch) ... SKIP: Skipping test: test_lapack Skipping fortran compiler mismatch on non Linux platform test_random (test_decomp.TestCholesky) ... ok test_random_complex (test_decomp.TestCholesky) ... ok test_simple (test_decomp.TestCholesky) ... ok test_simple_complex (test_decomp.TestCholesky) ... ok test_datanotshared (test_decomp.TestDataNotShared) ... ok test_simple (test_decomp.TestDiagSVD) ... ok Test matrices giving some Nan generalized eigen values. ... ok test_simple (test_decomp.TestEig) ... ok test_simple_complex (test_decomp.TestEig) ... ok Test singular pair ... ok Compare dgbtrf LU factorisation with the LU factorisation result ... ok Compare dgbtrs solutions for linear equation system A*x = b ... ok Compare dsbev eigenvalues and eigenvectors with ... ok Compare dsbevd eigenvalues and eigenvectors with ... ok Compare dsbevx eigenvalues and eigenvectors ... ok Compare eigenvalues and eigenvectors of eig_banded ... ok Compare eigenvalues of eigvals_banded with those of linalg.eig. ... ok Compare zgbtrf LU factorisation with the LU factorisation result ... ok Compare zgbtrs solutions for linear equation system A*x = b ... ok Compare zhbevd eigenvalues and eigenvectors ... ok Compare zhbevx eigenvalues and eigenvectors ... ok test_simple (test_decomp.TestEigVals) ... ok test_simple_complex (test_decomp.TestEigVals) ... ok test_simple_tr (test_decomp.TestEigVals) ... ok test_random (test_decomp.TestHessenberg) ... ok test_random_complex (test_decomp.TestHessenberg) ... ok test_simple (test_decomp.TestHessenberg) ... ok test_simple2 (test_decomp.TestHessenberg) ... ok test_simple_complex (test_decomp.TestHessenberg) ... ok test_hrectangular (test_decomp.TestLU) ... ok test_hrectangular_complex (test_decomp.TestLU) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLU) ... ok test_simple2 (test_decomp.TestLU) ... ok test_simple2_complex (test_decomp.TestLU) ... ok test_simple_complex (test_decomp.TestLU) ... ok test_vrectangular (test_decomp.TestLU) ... ok test_vrectangular_complex (test_decomp.TestLU) ... ok test_hrectangular (test_decomp.TestLUSingle) ... ok test_hrectangular_complex (test_decomp.TestLUSingle) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLUSingle) ... ok test_simple2 (test_decomp.TestLUSingle) ... ok test_simple2_complex (test_decomp.TestLUSingle) ... ok test_simple_complex (test_decomp.TestLUSingle) ... ok test_vrectangular (test_decomp.TestLUSingle) ... ok test_vrectangular_complex (test_decomp.TestLUSingle) ... ok test_lu (test_decomp.TestLUSolve) ... ok test_random (test_decomp.TestQR) ... ok test_random_complex (test_decomp.TestQR) ... ok test_random_tall (test_decomp.TestQR) ... ok test_random_tall_e (test_decomp.TestQR) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/linalg/decomp.py:1177: DeprecationWarning: qr econ argument will be removed after scipy 0.7. 
The economy transform will then be available through the mode='economic' argument. "the mode='economic' argument.", DeprecationWarning) ok test_random_trap (test_decomp.TestQR) ... ok test_simple (test_decomp.TestQR) ... ok test_simple_complex (test_decomp.TestQR) ... ok test_simple_tall (test_decomp.TestQR) ... ok test_simple_tall_e (test_decomp.TestQR) ... ok test_simple_trap (test_decomp.TestQR) ... ok test_random (test_decomp.TestRQ) ... ok test_simple (test_decomp.TestRQ) ... ok test_random (test_decomp.TestSVD) ... ok test_random_complex (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVD) ... ok test_simple_complex (test_decomp.TestSVD) ... ok test_simple_overdet (test_decomp.TestSVD) ... ok test_simple_singular (test_decomp.TestSVD) ... ok test_simple_underdet (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVDVals) ... ok test_simple_complex (test_decomp.TestSVDVals) ... ok test_simple_overdet (test_decomp.TestSVDVals) ... ok test_simple_overdet_complex (test_decomp.TestSVDVals) ... ok test_simple_underdet (test_decomp.TestSVDVals) ... ok test_simple_underdet_complex (test_decomp.TestSVDVals) ... ok test_simple (test_decomp.TestSchur) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, (2, 4)) ... 
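
(The qr DeprecationWarning above is only about spelling: the econ=True keyword goes away after scipy 0.7 in favour of the mode argument. A minimal sketch, assuming the mode='economic' spelling the warning points to is available in your build:

>>> import numpy as np
>>> from scipy.linalg import qr
>>> a = np.random.rand(9, 6)
>>> q, r = qr(a, mode='economic')  # replaces the deprecated econ=True
)
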
ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, (2, 4)) ... 
ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, (2, 4)) ... 
ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh_integer ... ok test_default_a (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCaxpy) ... ok test_x_and_y_stride (test_fblas.TestCaxpy) ... ok test_x_bad_size (test_fblas.TestCaxpy) ... ok test_x_stride (test_fblas.TestCaxpy) ... ok test_y_bad_size (test_fblas.TestCaxpy) ... ok test_y_stride (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCcopy) ... ok test_x_and_y_stride (test_fblas.TestCcopy) ... ok test_x_bad_size (test_fblas.TestCcopy) ... ok test_x_stride (test_fblas.TestCcopy) ... ok test_y_bad_size (test_fblas.TestCcopy) ... ok test_y_stride (test_fblas.TestCcopy) ... ok test_default_beta_y (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCgemv) ... ok test_simple_transpose (test_fblas.TestCgemv) ... ok test_simple_transpose_conj (test_fblas.TestCgemv) ... ok test_x_stride (test_fblas.TestCgemv) ... ok test_x_stride_assert (test_fblas.TestCgemv) ... ok test_x_stride_transpose (test_fblas.TestCgemv) ... ok test_y_stride (test_fblas.TestCgemv) ... ok test_y_stride_assert (test_fblas.TestCgemv) ... ok test_y_stride_transpose (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCscal) ... ok test_x_bad_size (test_fblas.TestCscal) ... ok test_x_stride (test_fblas.TestCscal) ... ok test_simple (test_fblas.TestCswap) ... ok test_x_and_y_stride (test_fblas.TestCswap) ... ok test_x_bad_size (test_fblas.TestCswap) ... ok test_x_stride (test_fblas.TestCswap) ... ok test_y_bad_size (test_fblas.TestCswap) ... ok test_y_stride (test_fblas.TestCswap) ... ok test_default_a (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDaxpy) ... ok test_x_and_y_stride (test_fblas.TestDaxpy) ... ok test_x_bad_size (test_fblas.TestDaxpy) ... ok test_x_stride (test_fblas.TestDaxpy) ... ok test_y_bad_size (test_fblas.TestDaxpy) ... ok test_y_stride (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDcopy) ... ok test_x_and_y_stride (test_fblas.TestDcopy) ... ok test_x_bad_size (test_fblas.TestDcopy) ... ok test_x_stride (test_fblas.TestDcopy) ... ok test_y_bad_size (test_fblas.TestDcopy) ... ok test_y_stride (test_fblas.TestDcopy) ... ok test_default_beta_y (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDgemv) ... ok test_simple_transpose (test_fblas.TestDgemv) ... ok test_simple_transpose_conj (test_fblas.TestDgemv) ... ok test_x_stride (test_fblas.TestDgemv) ... ok test_x_stride_assert (test_fblas.TestDgemv) ... ok test_x_stride_transpose (test_fblas.TestDgemv) ... ok test_y_stride (test_fblas.TestDgemv) ... ok test_y_stride_assert (test_fblas.TestDgemv) ... ok test_y_stride_transpose (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDscal) ... ok test_x_bad_size (test_fblas.TestDscal) ... ok test_x_stride (test_fblas.TestDscal) ... ok test_simple (test_fblas.TestDswap) ... ok test_x_and_y_stride (test_fblas.TestDswap) ... ok test_x_bad_size (test_fblas.TestDswap) ... ok test_x_stride (test_fblas.TestDswap) ... ok test_y_bad_size (test_fblas.TestDswap) ... ok test_y_stride (test_fblas.TestDswap) ... ok test_default_a (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestSaxpy) ... ok test_x_and_y_stride (test_fblas.TestSaxpy) ... ok test_x_bad_size (test_fblas.TestSaxpy) ... ok test_x_stride (test_fblas.TestSaxpy) ... ok test_y_bad_size (test_fblas.TestSaxpy) ... 
ok test_y_stride (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestScopy) ... ok test_x_and_y_stride (test_fblas.TestScopy) ... ok test_x_bad_size (test_fblas.TestScopy) ... ok test_x_stride (test_fblas.TestScopy) ... ok test_y_bad_size (test_fblas.TestScopy) ... ok test_y_stride (test_fblas.TestScopy) ... ok test_default_beta_y (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSgemv) ... ok test_simple_transpose (test_fblas.TestSgemv) ... ok test_simple_transpose_conj (test_fblas.TestSgemv) ... ok test_x_stride (test_fblas.TestSgemv) ... ok test_x_stride_assert (test_fblas.TestSgemv) ... ok test_x_stride_transpose (test_fblas.TestSgemv) ... ok test_y_stride (test_fblas.TestSgemv) ... ok test_y_stride_assert (test_fblas.TestSgemv) ... ok test_y_stride_transpose (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSscal) ... ok test_x_bad_size (test_fblas.TestSscal) ... ok test_x_stride (test_fblas.TestSscal) ... ok test_simple (test_fblas.TestSswap) ... ok test_x_and_y_stride (test_fblas.TestSswap) ... ok test_x_bad_size (test_fblas.TestSswap) ... ok test_x_stride (test_fblas.TestSswap) ... ok test_y_bad_size (test_fblas.TestSswap) ... ok test_y_stride (test_fblas.TestSswap) ... ok test_default_a (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZaxpy) ... ok test_x_and_y_stride (test_fblas.TestZaxpy) ... ok test_x_bad_size (test_fblas.TestZaxpy) ... ok test_x_stride (test_fblas.TestZaxpy) ... ok test_y_bad_size (test_fblas.TestZaxpy) ... ok test_y_stride (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZcopy) ... ok test_x_and_y_stride (test_fblas.TestZcopy) ... ok test_x_bad_size (test_fblas.TestZcopy) ... ok test_x_stride (test_fblas.TestZcopy) ... ok test_y_bad_size (test_fblas.TestZcopy) ... ok test_y_stride (test_fblas.TestZcopy) ... ok test_default_beta_y (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZgemv) ... ok test_simple_transpose (test_fblas.TestZgemv) ... ok test_simple_transpose_conj (test_fblas.TestZgemv) ... ok test_x_stride (test_fblas.TestZgemv) ... ok test_x_stride_assert (test_fblas.TestZgemv) ... ok test_x_stride_transpose (test_fblas.TestZgemv) ... ok test_y_stride (test_fblas.TestZgemv) ... ok test_y_stride_assert (test_fblas.TestZgemv) ... ok test_y_stride_transpose (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZscal) ... ok test_x_bad_size (test_fblas.TestZscal) ... ok test_x_stride (test_fblas.TestZscal) ... ok test_simple (test_fblas.TestZswap) ... ok test_x_and_y_stride (test_fblas.TestZswap) ... ok test_x_bad_size (test_fblas.TestZswap) ... ok test_x_stride (test_fblas.TestZswap) ... ok test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_gebal (test_lapack.TestFlapackSimple) ... ok test_gehrd (test_lapack.TestFlapackSimple) ... ok test_clapack (test_lapack.TestLapack) ...
****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack.
****************************************************************
ok test_flapack (test_lapack.TestLapack) ... ok test_zero (test_matfuncs.TestExpM) ... ok test_nils (test_matfuncs.TestLogM) ... Result may be inaccurate, approximate err = 7.68933681286e-09 ok test_defective1 (test_matfuncs.TestSignM) ... ok test_defective2 (test_matfuncs.TestSignM) ... ok test_defective3 (test_matfuncs.TestSignM) ...
Result may be inaccurate, approximate err = 7.27595761418e-12 ok test_nils (test_matfuncs.TestSignM) ... ok test_bad (test_matfuncs.TestSqrtM) ... ok test_logsumexp (test_maxentropy.TestMaxentropy) ... ok test_simple (test_maxentropy.TestMaxentropy) ... ok test_bytescale (test_pilutil.TestPILUtil) ... ok test_imresize (test_pilutil.TestPILUtil) ... ERROR Test generator for parametric tests ... FAIL Test generator for parametric tests ... FAIL Test generator for parametric tests ... FAIL test_doccer.test_unindent('Another test\n with some indent', 'Another test\n with some indent') ... ok test_doccer.test_unindent('Another test, one line', 'Another test, one line') ... ok test_doccer.test_unindent('Another test\n with some indent', 'Another test\n with some indent') ... ok test_doccer.test_unindent_dict('Another test\n with some indent', 'Another test\n with some indent') ... ok test_doccer.test_unindent_dict('Another test, one line', 'Another test, one line') ... ok test_doccer.test_unindent_dict('Another test\n with some indent', 'Another test\n with some indent') ... ok test_doccer.test_docformat('Docstring\n Another test\n with some indent\n Another test, one line\n Another test\n with some indent\n', 'Docstring\n Another test\n with some indent\n Another test, one line\n Another test\n with some indent\n') ... ok test_doccer.test_docformat('Single line doc Another test\n with some indent', 'Single line doc Another test\n with some indent') ... ok test_doccer.test_decorator(' Docstring\n Another test\n with some indent\n ', ' Docstring\n Another test\n with some indent\n ') ... ok test_doccer.test_decorator(' Docstring\n Another test\n with some indent\n ', ' Docstring\n Another test\n with some indent\n ') ... ok test_filters.test_ticket_701 ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, 4) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(0, array([ 0.])) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, -1) ... ok test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, 4) ... ok affine_transform 1 ... ok affine transform 2 ... ok affine transform 3 ... ok affine transform 4 ... ok affine transform 5 ... ok affine transform 6 ... ok affine transform 7 ... ok affine transform 8 ... ok affine transform 9 ... ok affine transform 10 ... ok affine transform 11 ... ok affine transform 12 ... ok affine transform 13 ... ok affine transform 14 ... ok affine transform 15 ... ok affine transform 16 ... ok affine transform 17 ... ok affine transform 18 ... ok affine transform 19 ... ok affine transform 20 ... ok affine transform 21 ... ok binary closing 1 ... ok binary closing 2 ... ok binary dilation 1 ... ok binary dilation 2 ... ok binary dilation 3 ... ok binary dilation 4 ... ok binary dilation 5 ... ok binary dilation 6 ... ok binary dilation 7 ... ok binary dilation 8 ... ok binary dilation 9 ... ok binary dilation 10 ... ok binary dilation 11 ... ok binary dilation 12 ... ok binary dilation 13 ... ok binary dilation 14 ... ok binary dilation 15 ... ok binary dilation 16 ... ok binary dilation 17 ... ok binary dilation 18 ... ok binary dilation 19 ... ok binary dilation 20 ... ok binary dilation 21 ... ok binary dilation 22 ... ok binary dilation 23 ... ok binary dilation 24 ... ok binary dilation 25 ... ok binary dilation 26 ... 
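
(Note the only real problems in the run so far, a few screens up: test_imresize (test_pilutil.TestPILUtil) gives an ERROR and three "Test generator for parametric tests" cases FAIL. To see the full tracebacks without re-running the whole suite, you can point nose at just the offending module; a sketch, assuming test_pilutil still lives under scipy.misc.tests:

>>> import nose
>>> nose.run(argv=['nosetests', '-v', 'scipy.misc.tests.test_pilutil'])
)
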
ok binary dilation 27 ... ok binary dilation 28 ... ok binary dilation 29 ... ok binary dilation 30 ... ok binary dilation 31 ... ok binary dilation 32 ... ok binary dilation 33 ... ok binary dilation 34 ... ok binary dilation 35 ... ok binary erosion 1 ... ok binary erosion 2 ... ok binary erosion 3 ... ok binary erosion 4 ... ok binary erosion 5 ... ok binary erosion 6 ... ok binary erosion 7 ... ok binary erosion 8 ... ok binary erosion 9 ... ok binary erosion 10 ... ok binary erosion 11 ... ok binary erosion 12 ... ok binary erosion 13 ... ok binary erosion 14 ... ok binary erosion 15 ... ok binary erosion 16 ... ok binary erosion 17 ... ok binary erosion 18 ... ok binary erosion 19 ... ok binary erosion 20 ... ok binary erosion 21 ... ok binary erosion 22 ... ok binary erosion 23 ... ok binary erosion 24 ... ok binary erosion 25 ... ok binary erosion 26 ... ok binary erosion 27 ... ok binary erosion 28 ... ok binary erosion 29 ... ok binary erosion 30 ... ok binary erosion 31 ... ok binary erosion 32 ... ok binary erosion 33 ... ok binary erosion 34 ... ok binary erosion 35 ... ok binary erosion 36 ... ok binary fill holes 1 ... ok binary fill holes 2 ... ok binary fill holes 3 ... ok binary opening 1 ... ok binary opening 2 ... ok binary propagation 1 ... ok binary propagation 2 ... ok black tophat 1 ... ok black tophat 2 ... ok boundary modes ... ok boundary modes 2 ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... ok find_objects 8 ... ok find_objects 9 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... 
ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 ... ok generic 1d filter 1 ... ok generic gradient magnitude 1 ... ok generic laplace filter 1 ... ok geometric transform 1 ... ok geometric transform 2 ... ok geometric transform 3 ... ok geometric transform 4 ... ok geometric transform 5 ... ok geometric transform 6 ... ok geometric transform 7 ... ok geometric transform 8 ... ok geometric transform 10 ... ok geometric transform 13 ... ok geometric transform 14 ... ok geometric transform 15 ... ok geometric transform 16 ... ok geometric transform 17 ... ok geometric transform 18 ... ok geometric transform 19 ... ok geometric transform 20 ... ok geometric transform 21 ... ok geometric transform 22 ... ok geometric transform 23 ... ok geometric transform 24 ... ok grey closing 1 ... ok grey closing 2 ... ok grey dilation 1 ... ok grey dilation 2 ... ok grey dilation 3 ... ok grey erosion 1 ... ok grey erosion 2 ... ok grey erosion 3 ... ok grey opening 1 ... ok grey opening 2 ... ok histogram 1 ... ok histogram 2 ... ok histogram 3 ... ok binary hit-or-miss transform 1 ... ok binary hit-or-miss transform 2 ... ok binary hit-or-miss transform 3 ... ok iterating a structure 1 ... ok iterating a structure 2 ... ok iterating a structure 3 ... ok label 1 ... ok label 2 ... ok label 3 ... ok label 4 ... ok label 5 ... ok label 6 ... ok label 7 ... ok label 8 ... ok label 9 ... ok label 10 ... ok label 11 ... ok label 12 ... ok label 13 ... ok laplace filter 1 ... ok laplace filter 2 ... ok map coordinates 1 ... ok map coordinates 2 ... ok maximum 1 ... ok maximum 2 ... ok maximum 3 ... ok maximum 4 ... ok Ticket #501 ... ok maximum filter 1 ... ok maximum filter 2 ... ok maximum filter 3 ... ok maximum filter 4 ... ok maximum filter 5 ... ok maximum filter 6 ... ok maximum filter 7 ... ok maximum filter 8 ... ok maximum filter 9 ... ok maximum position 1 ... ok maximum position 2 ... ok maximum position 3 ... ok maximum position 4 ... ok maximum position 5 ... ok maximum position 6 ... ok mean 1 ... ok mean 2 ... ok mean 3 ... ok mean 4 ... ok minimum 1 ... ok minimum 2 ... ok minimum 3 ... ok minimum 4 ... ok minimum filter 1 ... ok minimum filter 2 ... ok minimum filter 3 ... ok minimum filter 4 ... ok minimum filter 5 ... ok minimum filter 6 ... ok minimum filter 7 ... ok minimum filter 8 ... ok minimum filter 9 ... ok minimum position 1 ... ok minimum position 2 ... ok minimum position 3 ... ok minimum position 4 ... ok minimum position 5 ... ok minimum position 6 ... ok minimum position 7 ... ok morphological gradient 1 ... ok morphological gradient 2 ... ok morphological laplace 1 ... ok morphological laplace 2 ... ok prewitt filter 1 ... ok prewitt filter 2 ... ok prewitt filter 3 ... ok prewitt filter 4 ... ok rank filter 1 ... ok rank filter 2 ... ok rank filter 3 ... ok rank filter 4 ... ok rank filter 5 ... ok rank filter 6 ... ok rank filter 7 ... ok median filter 8 ... ok rank filter 9 ... ok rank filter 10 ... ok rank filter 11 ... ok rank filter 12 ... ok rank filter 13 ... ok rank filter 14 ... 
ok rotate 1 ... ok rotate 2 ... ok rotate 3 ... ok rotate 4 ... ok rotate 5 ... ok rotate 6 ... ok rotate 7 ... ok rotate 8 ... ok shift 1 ... ok shift 2 ... ok shift 3 ... ok shift 4 ... ok shift 5 ... ok shift 6 ... ok shift 7 ... ok shift 8 ... ok shift 9 ... ok sobel filter 1 ... ok sobel filter 2 ... ok sobel filter 3 ... ok sobel filter 4 ... ok spline filter 1 ... ok spline filter 2 ... ok spline filter 3 ... ok spline filter 4 ... ok spline filter 5 ... ok standard deviation 1 ... ok standard deviation 2 ... ok standard deviation 3 ... ok standard deviation 4 ... ok standard deviation 5 ... ok standard deviation 6 ... ok sum 1 ... ok sum 2 ... ok sum 3 ... ok sum 4 ... ok sum 5 ... ok sum 6 ... ok sum 7 ... ok sum 8 ... ok sum 9 ... ok sum 10 ... ok sum 11 ... ok sum 12 ... ok sum 13 ... ok uniform filter 1 ... ok uniform filter 2 ... ok uniform filter 3 ... ok uniform filter 4 ... ok uniform filter 5 ... ok uniform filter 6 ... ok variance 1 ... ok variance 2 ... ok variance 3 ... ok variance 4 ... ok variance 5 ... ok variance 6 ... ok watershed_ift 1 ... ok watershed_ift 2 ... ok watershed_ift 3 ... ok watershed_ift 4 ... ok watershed_ift 5 ... ok watershed_ift 6 ... ok watershed_ift 7 ... ok white tophat 1 ... ok white tophat 2 ... ok zoom 1 ... ok zoom 2 ... ok zoom by affine transformation 1 ... ok Regression test for #413: median_filter does not handle bytes orders. ... ok Ticket #643 ... ok test_explicit (test_odr.TestODR) ... ok test_implicit (test_odr.TestODR) ... ok test_lorentz (test_odr.TestODR) ... ok test_multi (test_odr.TestODR) ... ok test_pearson (test_odr.TestODR) ... ok test_simple (test_cobyla.TestCobyla) ... ok test_nnls (test_nnls.TestNNLS) ... ok test_anderson (test_nonlin.TestNonlin) ... ok test_anderson2 (test_nonlin.TestNonlin) ... ok test_broyden1 (test_nonlin.TestNonlin) ... ok test_broyden1modified (test_nonlin.TestNonlin) ... ok test_broyden2 (test_nonlin.TestNonlin) ... ok test_broyden3 (test_nonlin.TestNonlin) ... ok test_broydengeneralized (test_nonlin.TestNonlin) ... ok test_exciting (test_nonlin.TestNonlin) ... ok test_linearmixing (test_nonlin.TestNonlin) ... ok test_vackar (test_nonlin.TestNonlin) ... ok test_basic (test_optimize.TestLeastSq) ... ok test_full_output (test_optimize.TestLeastSq) ... ok test_input_untouched (test_optimize.TestLeastSq) ... ok Broyden-Fletcher-Goldfarb-Shanno optimization routine ... ok brent algorithm ... ok conjugate gradient optimization routine ... ok Test fminbound ... ok limited-memory bound-constrained BFGS algorithm ... ok line-search Newton conjugate gradient optimization routine ... ok Nelder-Mead simplex algorithm ... ok Powell (direction set) optimization routine ... ok test_tnc (test_optimize.TestTnc) ... ok test_bound_approximated (test_slsqp.TestSLSQP) ... ok test_bound_equality_given (test_slsqp.TestSLSQP) ... ok test_bound_equality_inequality_given (test_slsqp.TestSLSQP) ... ok test_unbounded_approximated (test_slsqp.TestSLSQP) ... ok test_unbounded_given (test_slsqp.TestSLSQP) ... ok test_bisect (test_zeros.TestBasic) ... ok test_brenth (test_zeros.TestBasic) ... ok test_brentq (test_zeros.TestBasic) ... ok test_ridder (test_zeros.TestBasic) ... ok Regression test for #651: better handling of badly conditionned ... ok test_simple (test_filter_design.TestTf2zpk) ... ok test_basic (test_signaltools.TestCSpline1DEval) ... ok test_signaltools.TestChebWin.test_cheb_even ... ok test_signaltools.TestChebWin.test_cheb_odd ... ok test_basic (test_signaltools.TestConvolve) ... 
ok
test_complex (test_signaltools.TestFFTConvolve) ... ok
test_real (test_signaltools.TestFFTConvolve) ... ok
Regression test for #880: empty array for zi crashes. ... ok
test_rank1 (test_signaltools.TestLinearFilterComplex128) ... ok
test_rank2 (test_signaltools.TestLinearFilterComplex128) ... ok
test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex128) ... ok
test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex128) ... ok
test_rank3 (test_signaltools.TestLinearFilterComplex128) ... ok
Regression test for #880: empty array for zi crashes. ... ok
test_rank1 (test_signaltools.TestLinearFilterComplex64) ... ok
test_rank2 (test_signaltools.TestLinearFilterComplex64) ... ok
test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex64) ... ok
test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex64) ... ok
test_rank3 (test_signaltools.TestLinearFilterComplex64) ... ok
Regression test for #880: empty array for zi crashes. ... ok
test_rank1 (test_signaltools.TestLinearFilterDecimal) ... ok
test_rank2 (test_signaltools.TestLinearFilterDecimal) ... ok
test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterDecimal) ... ok
test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterDecimal) ... ok
test_rank3 (test_signaltools.TestLinearFilterDecimal) ... ok
Regression test for #880: empty array for zi crashes. ... ok
test_rank1 (test_signaltools.TestLinearFilterFloat32) ... ok
test_rank2 (test_signaltools.TestLinearFilterFloat32) ... ok
test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat32) ... ok
test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat32) ... ok
test_rank3 (test_signaltools.TestLinearFilterFloat32) ... ok
Regression test for #880: empty array for zi crashes. ... ok
test_rank1 (test_signaltools.TestLinearFilterFloat64) ... ok
test_rank2 (test_signaltools.TestLinearFilterFloat64) ... ok
test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat64) ... ok
test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat64) ... ok
test_rank3 (test_signaltools.TestLinearFilterFloat64) ... ok
test_basic (test_signaltools.TestMedFilt) ... ok
test_basic (test_signaltools.TestOrderFilt) ... ok
test_basic (test_signaltools.TestWiener) ... ok
test_log_chirp_at_zero (test_waveforms.TestChirp) ... ok
test_cascade (test_wavelets.TestWavelets) ... ok
test_daub (test_wavelets.TestWavelets) ... ok
test_morlet (test_wavelets.TestWavelets) ... ok
test_qmf (test_wavelets.TestWavelets) ... ok
Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled
Getting factors of real matrix ... SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled
Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled
Getting factors of real matrix ... SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled
Prefactorize (with UMFPACK) matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_umfpack UMFPACK appears not to be compiled
Prefactorize matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_without_umfpack UMFPACK appears not to be compiled
Solve with UMFPACK: double precision complex ... SKIP: Skipping test: test_solve_complex_umfpack UMFPACK appears not to be compiled
Solve: single precision complex ... SKIP: Skipping test: test_solve_complex_without_umfpack UMFPACK appears not to be compiled
Solve with UMFPACK: double precision, sparse rhs ... SKIP: Skipping test: test_solve_sparse_rhs UMFPACK appears not to be compiled
Solve with UMFPACK: double precision ... SKIP: Skipping test: test_solve_umfpack UMFPACK appears not to be compiled
Solve: single precision ... SKIP: Skipping test: test_solve_without_umfpack UMFPACK appears not to be compiled
test_twodiags (test_linsolve.TestLinsolve) ... ok
test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric) ... ok
test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric) ... ok
test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric) ... ok
test_starting_vector (test_arpack.TestEigenNonSymmetric) ... ok
test_starting_vector (test_arpack.TestEigenSymmetric) ... ok
test_symmetric_modes (test_arpack.TestEigenSymmetric) ... ok
test (test_speigs.TestEigs) ... ok
test_lobpcg.test_Small ... ok
test_lobpcg.test_ElasticRod ... ok
test_lobpcg.test_MikotaPair ... ok
test_callback (test_iterative.TestGMRES) ... ok
test whether all methods converge ... ok
test whether maxiter is respected ... ok
test whether all methods accept a trivial preconditioner ... ok
Check that QMR works with left and right preconditioners ... ok
test_basic (test_interface.TestAsLinearOperator) ... ok
test_matvec (test_interface.TestLinearOperator) ... ok
test_abs (test_base.TestBSR) ... ok
test_add (test_base.TestBSR) ... ok
adding a dense matrix to a sparse matrix ... ok
test_add_sub (test_base.TestBSR) ... ok
test_asfptype (test_base.TestBSR) ... ok
test_astype (test_base.TestBSR) ... ok
test_bsr_matvec (test_base.TestBSR) ... ok
test_bsr_matvecs (test_base.TestBSR) ... ok
check native BSR format constructor ... ok
construct from dense ... ok
Check whether the copy=True and copy=False keywords work ... ok
Does the matrix's .diagonal() method work? ... ok
test_elementwise_divide (test_base.TestBSR) ... ok
test_elementwise_multiply (test_base.TestBSR) ... ok
test_eliminate_zeros (test_base.TestBSR) ... ok
create empty matrices ... ok
Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok
test_from_array (test_base.TestBSR) ... ok
test_from_list (test_base.TestBSR) ... ok
test_from_matrix (test_base.TestBSR) ... ok
test_from_sparse (test_base.TestBSR) ... ok
test_getcol (test_base.TestBSR) ... ok
test_getrow (test_base.TestBSR) ... ok
test_idiv_scalar (test_base.TestBSR) ... ok
test_imag (test_base.TestBSR) ... ok
test_imul_scalar (test_base.TestBSR) ... ok
test_invalid_shapes (test_base.TestBSR) ... ok
test_matmat_dense (test_base.TestBSR) ... ok
test_matmat_sparse (test_base.TestBSR) ... ok
test_matvec (test_base.TestBSR) ... ok
Does the matrix's .mean(axis=...) method work? ... ok
test_mu (test_base.TestBSR) ... ok
test_mul_scalar (test_base.TestBSR) ... ok
test_neg (test_base.TestBSR) ... ok
test_nonzero (test_base.TestBSR) ... ok
test_pow (test_base.TestBSR) ... ok
test_radd (test_base.TestBSR) ... ok
test_real (test_base.TestBSR) ... ok
test_repr (test_base.TestBSR) ... ok
test_rmatvec (test_base.TestBSR) ... ok
test_rmul_scalar (test_base.TestBSR) ... ok
test_rsub (test_base.TestBSR) ... ok
test that A*x works for x with shape () (1,) and (1,1) ... ok
test_sparse_format_conversions (test_base.TestBSR) ... ok
test_str (test_base.TestBSR) ... ok
test_sub (test_base.TestBSR) ... ok
subtracting a dense matrix to/from a sparse matrix ... ok
Does the matrix's .sum(axis=...) method work? ... ok
test_toarray (test_base.TestBSR) ... ok
test_tobsr (test_base.TestBSR) ... ok
test_todense (test_base.TestBSR) ... ok
test_transpose (test_base.TestBSR) ...
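(All of the UMFPACK SKIPs earlier in the run share one cause: this build has no UMFPACK wrappers, so scipy.sparse falls back to SuperLU for direct solves. A minimal sketch for checking which backend is active; it assumes the wrappers are provided by the optional scikits.umfpack package on this SciPy version:

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import use_solver, spsolve

    try:
        import scikits.umfpack          # optional dependency; an assumption here
        use_solver(useUmfpack=True)
        print("UMFPACK available")
    except ImportError:
        use_solver(useUmfpack=False)    # SuperLU fallback, hence the SKIPs above
        print("no UMFPACK; using SuperLU")

    A = csc_matrix([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(spsolve(A, b))                # solves A*x = b with either backend
)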
ok test_abs (test_base.TestCOO) ... ok test_add (test_base.TestCOO) ... ok adding a dense matrix to a sparse matrix ... ok test_asfptype (test_base.TestCOO) ... ok test_astype (test_base.TestCOO) ... ok unsorted triplet format ... ok unsorted triplet format with duplicates (which are summed) ... ok empty matrix ... ok from dense matrix ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCOO) ... ok test_elementwise_multiply (test_base.TestCOO) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestCOO) ... ok test_from_list (test_base.TestCOO) ... ok test_from_matrix (test_base.TestCOO) ... ok test_from_sparse (test_base.TestCOO) ... ok test_getcol (test_base.TestCOO) ... ok test_getrow (test_base.TestCOO) ... ok test_imag (test_base.TestCOO) ... ok test_invalid_shapes (test_base.TestCOO) ... ok test_matmat_dense (test_base.TestCOO) ... ok test_matmat_sparse (test_base.TestCOO) ... ok test_matvec (test_base.TestCOO) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mul_scalar (test_base.TestCOO) ... ok test_neg (test_base.TestCOO) ... ok test_nonzero (test_base.TestCOO) ... ok test_pow (test_base.TestCOO) ... ok test_radd (test_base.TestCOO) ... ok test_real (test_base.TestCOO) ... ok test_repr (test_base.TestCOO) ... ok test_rmatvec (test_base.TestCOO) ... ok test_rmul_scalar (test_base.TestCOO) ... ok test_rsub (test_base.TestCOO) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok test_sparse_format_conversions (test_base.TestCOO) ... ok test_str (test_base.TestCOO) ... ok test_sub (test_base.TestCOO) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCOO) ... ok test_tobsr (test_base.TestCOO) ... ok test_todense (test_base.TestCOO) ... ok test_transpose (test_base.TestCOO) ... ok test_abs (test_base.TestCSC) ... ok test_add (test_base.TestCSC) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestCSC) ... ok test_asfptype (test_base.TestCSC) ... ok test_astype (test_base.TestCSC) ... ok test_constructor1 (test_base.TestCSC) ... ok test_constructor2 (test_base.TestCSC) ... ok test_constructor3 (test_base.TestCSC) ... ok using (data, ij) format ... ok infer dimensions from arrays ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCSC) ... ok test_elementwise_multiply (test_base.TestCSC) ... ok test_eliminate_zeros (test_base.TestCSC) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestCSC) ... ok test_from_array (test_base.TestCSC) ... ok test_from_list (test_base.TestCSC) ... ok test_from_matrix (test_base.TestCSC) ... ok test_from_sparse (test_base.TestCSC) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestCSC) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestCSC) ... ok test_getelement (test_base.TestCSC) ... ok test_getrow (test_base.TestCSC) ... ok test_idiv_scalar (test_base.TestCSC) ... ok test_imag (test_base.TestCSC) ... ok test_imul_scalar (test_base.TestCSC) ... ok test_invalid_shapes (test_base.TestCSC) ... ok test_matmat_dense (test_base.TestCSC) ... 
ok test_matmat_sparse (test_base.TestCSC) ... ok test_matvec (test_base.TestCSC) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestCSC) ... ok test_mul_scalar (test_base.TestCSC) ... ok test_neg (test_base.TestCSC) ... ok test_nonzero (test_base.TestCSC) ... ok test_pow (test_base.TestCSC) ... ok test_radd (test_base.TestCSC) ... ok test_real (test_base.TestCSC) ... ok test_repr (test_base.TestCSC) ... ok test_rmatvec (test_base.TestCSC) ... ok test_rmul_scalar (test_base.TestCSC) ... ok test_rsub (test_base.TestCSC) ... ok test_setelement (test_base.TestCSC) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sort_indices (test_base.TestCSC) ... ok test_sparse_format_conversions (test_base.TestCSC) ... ok test_str (test_base.TestCSC) ... ok test_sub (test_base.TestCSC) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCSC) ... ok test_tobsr (test_base.TestCSC) ... ok test_todense (test_base.TestCSC) ... ok test_transpose (test_base.TestCSC) ... ok test_abs (test_base.TestCSR) ... ok test_add (test_base.TestCSR) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestCSR) ... ok test_asfptype (test_base.TestCSR) ... ok test_astype (test_base.TestCSR) ... ok test_constructor1 (test_base.TestCSR) ... ok test_constructor2 (test_base.TestCSR) ... ok test_constructor3 (test_base.TestCSR) ... ok using (data, ij) format ... ok infer dimensions from arrays ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestCSR) ... ok test_elementwise_multiply (test_base.TestCSR) ... ok test_eliminate_zeros (test_base.TestCSR) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestCSR) ... ok test_from_array (test_base.TestCSR) ... ok test_from_list (test_base.TestCSR) ... ok test_from_matrix (test_base.TestCSR) ... ok test_from_sparse (test_base.TestCSR) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestCSR) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestCSR) ... ok test_getelement (test_base.TestCSR) ... ok test_getrow (test_base.TestCSR) ... ok test_idiv_scalar (test_base.TestCSR) ... ok test_imag (test_base.TestCSR) ... ok test_imul_scalar (test_base.TestCSR) ... ok test_invalid_shapes (test_base.TestCSR) ... ok test_matmat_dense (test_base.TestCSR) ... ok test_matmat_sparse (test_base.TestCSR) ... ok test_matvec (test_base.TestCSR) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestCSR) ... ok test_mul_scalar (test_base.TestCSR) ... ok test_neg (test_base.TestCSR) ... ok test_nonzero (test_base.TestCSR) ... ok test_pow (test_base.TestCSR) ... ok test_radd (test_base.TestCSR) ... ok test_real (test_base.TestCSR) ... ok test_repr (test_base.TestCSR) ... ok test_rmatvec (test_base.TestCSR) ... ok test_rmul_scalar (test_base.TestCSR) ... ok test_rsub (test_base.TestCSR) ... ok test_setelement (test_base.TestCSR) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sort_indices (test_base.TestCSR) ... ok test_sparse_format_conversions (test_base.TestCSR) ... 
ok test_str (test_base.TestCSR) ... ok test_sub (test_base.TestCSR) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestCSR) ... ok test_tobsr (test_base.TestCSR) ... ok test_todense (test_base.TestCSR) ... ok test_transpose (test_base.TestCSR) ... ok test_abs (test_base.TestDIA) ... ok test_add (test_base.TestDIA) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestDIA) ... ok test_asfptype (test_base.TestDIA) ... ok test_astype (test_base.TestDIA) ... ok test_constructor1 (test_base.TestDIA) ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestDIA) ... ok test_elementwise_multiply (test_base.TestDIA) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestDIA) ... ok test_from_list (test_base.TestDIA) ... ok test_from_matrix (test_base.TestDIA) ... ok test_from_sparse (test_base.TestDIA) ... ok test_getcol (test_base.TestDIA) ... ok test_getrow (test_base.TestDIA) ... ok test_imag (test_base.TestDIA) ... ok test_invalid_shapes (test_base.TestDIA) ... ok test_matmat_dense (test_base.TestDIA) ... ok test_matmat_sparse (test_base.TestDIA) ... ok test_matvec (test_base.TestDIA) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestDIA) ... ok test_mul_scalar (test_base.TestDIA) ... ok test_neg (test_base.TestDIA) ... ok test_nonzero (test_base.TestDIA) ... ok test_pow (test_base.TestDIA) ... ok test_radd (test_base.TestDIA) ... ok test_real (test_base.TestDIA) ... ok test_repr (test_base.TestDIA) ... ok test_rmatvec (test_base.TestDIA) ... ok test_rmul_scalar (test_base.TestDIA) ... ok test_rsub (test_base.TestDIA) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok test_sparse_format_conversions (test_base.TestDIA) ... ok test_str (test_base.TestDIA) ... ok test_sub (test_base.TestDIA) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestDIA) ... ok test_tobsr (test_base.TestDIA) ... ok test_todense (test_base.TestDIA) ... ok test_transpose (test_base.TestDIA) ... ok test_abs (test_base.TestDOK) ... ok test_add (test_base.TestDOK) ... ok adding a dense matrix to a sparse matrix ... ok test_asfptype (test_base.TestDOK) ... ok test_astype (test_base.TestDOK) ... ok Test provided by Andrew Straw. Fails in SciPy <= r1477. ... ok Check whether the copy=True and copy=False keywords work ... ok test_ctor (test_base.TestDOK) ... ok Does the matrix's .diagonal() method work? ... ok test_elementwise_divide (test_base.TestDOK) ... ok test_elementwise_multiply (test_base.TestDOK) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_from_array (test_base.TestDOK) ... ok test_from_list (test_base.TestDOK) ... ok test_from_matrix (test_base.TestDOK) ... ok test_from_sparse (test_base.TestDOK) ... ok test_getcol (test_base.TestDOK) ... ok test_getelement (test_base.TestDOK) ... ok test_getrow (test_base.TestDOK) ... ok test_imag (test_base.TestDOK) ... ok test_invalid_shapes (test_base.TestDOK) ... ok test_matmat_dense (test_base.TestDOK) ... ok test_matmat_sparse (test_base.TestDOK) ... ok test_matvec (test_base.TestDOK) ... ok Does the matrix's .mean(axis=...) method work? ... 
ok test_mul_scalar (test_base.TestDOK) ... ok test_mult (test_base.TestDOK) ... ok test_neg (test_base.TestDOK) ... ok test_nonzero (test_base.TestDOK) ... ok test_pow (test_base.TestDOK) ... ok test_radd (test_base.TestDOK) ... ok test_real (test_base.TestDOK) ... ok test_repr (test_base.TestDOK) ... ok test_rmatvec (test_base.TestDOK) ... ok test_rmul_scalar (test_base.TestDOK) ... ok test_rsub (test_base.TestDOK) ... ok Test for slice functionality (EJS) ... ok test_setelement (test_base.TestDOK) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... ok test_sparse_format_conversions (test_base.TestDOK) ... ok test_str (test_base.TestDOK) ... ok test_sub (test_base.TestDOK) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestDOK) ... ok test_tobsr (test_base.TestDOK) ... ok test_todense (test_base.TestDOK) ... ok test_transpose (test_base.TestDOK) ... ok test_abs (test_base.TestLIL) ... ok test_add (test_base.TestLIL) ... ok adding a dense matrix to a sparse matrix ... ok test_add_sub (test_base.TestLIL) ... ok test_asfptype (test_base.TestLIL) ... ok test_astype (test_base.TestLIL) ... ok Check whether the copy=True and copy=False keywords work ... ok Does the matrix's .diagonal() method work? ... ok test_dot (test_base.TestLIL) ... ok test_elementwise_divide (test_base.TestLIL) ... ok test_elementwise_multiply (test_base.TestLIL) ... ok create empty matrices ... ok Test manipulating empty matrices. Fails in SciPy SVN <= r1768 ... ok test_fancy_indexing (test_base.TestLIL) ... ok test_from_array (test_base.TestLIL) ... ok test_from_list (test_base.TestLIL) ... ok test_from_matrix (test_base.TestLIL) ... ok test_from_sparse (test_base.TestLIL) ... ok Test for new slice functionality (EJS) ... ok test_get_slices (test_base.TestLIL) ... ok Test for new slice functionality (EJS) ... ok test_getcol (test_base.TestLIL) ... ok test_getelement (test_base.TestLIL) ... ok test_getrow (test_base.TestLIL) ... ok test_idiv_scalar (test_base.TestLIL) ... ok test_imag (test_base.TestLIL) ... ok test_imul_scalar (test_base.TestLIL) ... ok test_inplace_ops (test_base.TestLIL) ... ok test_invalid_shapes (test_base.TestLIL) ... ok Tests whether a lil_matrix can be constructed from a ... ok test_lil_iteration (test_base.TestLIL) ... ok Tests whether a row of one lil_matrix can be assigned to ... ok test_lil_sequence_assignement (test_base.TestLIL) ... ok test_lil_slice_assignment (test_base.TestLIL) ... ok test_matmat_dense (test_base.TestLIL) ... ok test_matmat_sparse (test_base.TestLIL) ... ok test_matvec (test_base.TestLIL) ... ok Does the matrix's .mean(axis=...) method work? ... ok test_mu (test_base.TestLIL) ... ok test_mul_scalar (test_base.TestLIL) ... ok test_neg (test_base.TestLIL) ... ok test_nonzero (test_base.TestLIL) ... ok test_point_wise_multiply (test_base.TestLIL) ... ok test_pow (test_base.TestLIL) ... ok test_radd (test_base.TestLIL) ... ok test_real (test_base.TestLIL) ... ok test_repr (test_base.TestLIL) ... ok test_reshape (test_base.TestLIL) ... ok test_rmatvec (test_base.TestLIL) ... ok test_rmul_scalar (test_base.TestLIL) ... ok test_rsub (test_base.TestLIL) ... ok test_scalar_mul (test_base.TestLIL) ... ok test_setelement (test_base.TestLIL) ... ok test that A*x works for x with shape () (1,) and (1,1) ... ok Test whether the lu_solve command segfaults, as reported by Nils ... 
ok test_sparse_format_conversions (test_base.TestLIL) ... ok test_str (test_base.TestLIL) ... ok test_sub (test_base.TestLIL) ... ok subtracting a dense matrix to/from a sparse matrix ... ok Does the matrix's .sum(axis=...) method work? ... ok test_toarray (test_base.TestLIL) ... ok test_tobsr (test_base.TestLIL) ... ok test_todense (test_base.TestLIL) ... ok test_transpose (test_base.TestLIL) ... ok test_bmat (test_construct.TestConstructUtils) ... ok test_eye (test_construct.TestConstructUtils) ... ok test_hstack (test_construct.TestConstructUtils) ... ok test_identity (test_construct.TestConstructUtils) ... ok test_kron (test_construct.TestConstructUtils) ... ok test_kronsum (test_construct.TestConstructUtils) ... ok test_lil_diags (test_construct.TestConstructUtils) ... ok test_spdiags (test_construct.TestConstructUtils) ... ok test_vstack (test_construct.TestConstructUtils) ... ok test_tril (test_extract.TestExtract) ... ok test_triu (test_extract.TestExtract) ... ok test_count_blocks (test_spfuncs.TestSparseFunctions) ... ok test_estimate_blocksize (test_spfuncs.TestSparseFunctions) ... ok test_scale_rows_and_cols (test_spfuncs.TestSparseFunctions) ... ok test_getdtype (test_sputils.TestSparseUtils) ... ok test_isdense (test_sputils.TestSparseUtils) ... ok test_isintlike (test_sputils.TestSparseUtils) ... ok test_isscalarlike (test_sputils.TestSparseUtils) ... ok test_issequence (test_sputils.TestSparseUtils) ... ok test_isshape (test_sputils.TestSparseUtils) ... ok test_upcast (test_sputils.TestSparseUtils) ... ok Tests cdist(X, 'braycurtis') on random data. ... ok Tests cdist(X, 'canberra') on random data. ... ok Tests cdist(X, 'chebychev') on random data. ... ok Tests cdist(X, 'sqeuclidean') on random data. ... ok Tests cdist(X, 'correlation') on random data. ... ok Tests cdist(X, 'cosine') on random data. ... ok Tests cdist(X, 'dice') on random data. ... ok Tests cdist(X, 'euclidean') on random data. ... ok Tests cdist(X, 'hamming') on random boolean data. ... ok Tests cdist(X, 'hamming') on random data. ... ok Tests cdist(X, 'jaccard') on random boolean data. ... ok Tests cdist(X, 'jaccard') on random data. ... ok Tests cdist(X, 'kulsinski') on random data. ... ok Tests cdist(X, 'mahalanobis') on random data. ... ok Tests cdist(X, 'matching') on random data. ... ok Tests cdist(X, 'minkowski') on random data. (p=1.23) ... ok Tests cdist(X, 'minkowski') on random data. (p=3.8) ... ok Tests cdist(X, 'minkowski') on random data. (p=4.6) ... ok Tests cdist(X, 'rogerstanimoto') on random data. ... ok Tests cdist(X, 'russellrao') on random data. ... ok Tests cdist(X, 'seuclidean') on random data. ... ok Tests cdist(X, 'sokalmichener') on random data. ... ok Tests cdist(X, 'sokalsneath') on random data. ... ok Tests cdist(X, 'sqeuclidean') on random data. ... ok Tests cdist(X, 'wminkowski') on random data. (p=1.23) ... ok Tests cdist(X, 'wminkowski') on random data. (p=3.8) ... ok Tests cdist(X, 'wminkowski') on random data. (p=4.6) ... ok Tests cdist(X, 'yule') on random data. ... ok Tests is_valid_dm(*) on an assymetric distance matrix. Exception expected. ... ok Tests is_valid_dm(*) on an assymetric distance matrix. False expected. ... ok Tests is_valid_dm(*) on a correct 1x1. True expected. ... ok Tests is_valid_dm(*) on a correct 2x2. True expected. ... ok Tests is_valid_dm(*) on a correct 3x3. True expected. ... ok Tests is_valid_dm(*) on a correct 4x4. True expected. ... ok Tests is_valid_dm(*) on a correct 5x5. True expected. ... ok Tests is_valid_dm(*) on a 1D array. 
Exception expected. ... ok Tests is_valid_dm(*) on a 1D array. False expected. ... ok Tests is_valid_dm(*) on a 3D array. Exception expected. ... ok Tests is_valid_dm(*) on a 3D array. False expected. ... ok Tests is_valid_dm(*) on an int16 array. Exception expected. ... ok Tests is_valid_dm(*) on an int16 array. False expected. ... ok Tests is_valid_dm(*) on a distance matrix with a nonzero diagonal. Exception expected. ... ok Tests is_valid_dm(*) on a distance matrix with a nonzero diagonal. False expected. ... ok Tests is_valid_y(*) on 100 improper condensed distance matrices. Expecting exception. ... ok Tests is_valid_y(*) on a correct 2x2 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 3x3 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 4x4 condensed. True expected. ... ok Tests is_valid_y(*) on a correct 5x5 condensed. True expected. ... ok Tests is_valid_y(*) on a 2D array. Exception expected. ... ok Tests is_valid_y(*) on a 2D array. False expected. ... ok Tests is_valid_y(*) on a 3D array. Exception expected. ... ok Tests is_valid_y(*) on a 3D array. False expected. ... ok Tests is_valid_y(*) on an int16 array. Exception expected. ... ok Tests is_valid_y(*) on an int16 array. False expected. ... ok Tests num_obs_dm(D) on a 0x0 distance matrix. Expecting exception. ... ok Tests num_obs_dm(D) on a 1x1 distance matrix. ... ok Tests num_obs_dm(D) on a 2x2 distance matrix. ... ok Tests num_obs_dm(D) on a 3x3 distance matrix. ... ok Tests num_obs_dm(D) on a 4x4 distance matrix. ... ok Tests num_obs_dm with observation matrices of multiple sizes. ... ok Tests num_obs_y(y) on a condensed distance matrix over 1 observations. Expecting exception. ... ok Tests num_obs_y(y) on a condensed distance matrix over 2 observations. ... ok Tests num_obs_y(y) on 100 improper condensed distance matrices. Expecting exception. ... ok Tests num_obs_y(y) on a condensed distance matrix over 3 observations. ... ok Tests num_obs_y(y) on a condensed distance matrix over 4 observations. ... ok Tests num_obs_y(y) on a condensed distance matrix between 5 and 15 observations. ... ok Tests num_obs_y with observation matrices of multiple sizes. ... ok Tests pdist(X, 'canberra') to see if the two implementations match on the Iris data set. ... ok Tests pdist(X, 'canberra') to see if Canberra gives the right result as reported in Scipy bug report 711. ... ok Tests pdist(X, 'chebychev') on the Iris data set. ... ok Tests pdist(X, 'chebychev') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_chebychev') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'chebychev') on random data. ... ok Tests pdist(X, 'chebychev') on random data. (float32) ... ok Tests pdist(X, 'test_chebychev') [the non-C implementation] on random data. ... ok Tests pdist(X, 'cityblock') on the Iris data set. ... ok Tests pdist(X, 'cityblock') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_cityblock') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'cityblock') on random data. ... ok Tests pdist(X, 'cityblock') on random data. (float32) ... ok Tests pdist(X, 'test_cityblock') [the non-C implementation] on random data. ... ok Tests pdist(X, 'correlation') on the Iris data set. ... ok Tests pdist(X, 'correlation') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_correlation') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'correlation') on random data. ... ok Tests pdist(X, 'correlation') on random data. 
(float32) ... ok Tests pdist(X, 'test_correlation') [the non-C implementation] on random data. ... ok Tests pdist(X, 'cosine') on the Iris data set. ... ok Tests pdist(X, 'cosine') on the Iris data set. ... ok Tests pdist(X, 'test_cosine') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'cosine') on random data. ... ok Tests pdist(X, 'cosine') on random data. (float32) ... ok Tests pdist(X, 'test_cosine') [the non-C implementation] on random data. ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'hamming') on random data. (float32) ... ok Tests pdist(X, 'test_hamming') [the non-C implementation] on random data. ... ok Tests pdist(X, 'dice') to see if the two implementations match on random double input data. ... ok Tests dice(*,*) with mtica example #1. ... ok Tests dice(*,*) with mtica example #2. ... ok Tests pdist(X, 'jaccard') on random data. ... ok Tests pdist(X, 'jaccard') on random data. (float32) ... ok Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data. ... ok Tests pdist(X, 'euclidean') on the Iris data set. ... ok Tests pdist(X, 'euclidean') on the Iris data set. (float32) ... ok Tests pdist(X, 'test_euclidean') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'euclidean') on random data. ... ok Tests pdist(X, 'euclidean') on random data (float32). ... ok Tests pdist(X, 'test_euclidean') [the non-C implementation] on random data. ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'hamming') on random data. ... ok Tests pdist(X, 'test_hamming') [the non-C implementation] on random data. ... ok Tests pdist(X, 'jaccard') to see if the two implementations match on random double input data. ... ok Tests jaccard(*,*) with mtica example #1. ... ok Tests jaccard(*,*) with mtica example #2. ... ok Tests pdist(X, 'jaccard') on random data. ... ok Tests pdist(X, 'jaccard') on random data. (float32) ... ok Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data. ... ok Tests pdist(X, 'kulsinski') to see if the two implementations match on random double input data. ... ok Tests pdist(X, 'matching') to see if the two implementations match on random boolean input data. ... ok Tests matching(*,*) with mtica example #1 (nums). ... ok Tests matching(*,*) with mtica example #2. ... ok Tests pdist(X, 'minkowski') on iris data. ... ok Tests pdist(X, 'minkowski') on iris data. (float32) ... ok Tests pdist(X, 'test_minkowski') [the non-C implementation] on iris data. ... ok Tests pdist(X, 'minkowski') on random data. ... ok Tests pdist(X, 'minkowski') on random data. (float32) ... ok Tests pdist(X, 'test_minkowski') [the non-C implementation] on random data. ... ok Tests pdist(X, 'rogerstanimoto') to see if the two implementations match on random double input data. ... ok Tests rogerstanimoto(*,*) with mtica example #1. ... ok Tests rogerstanimoto(*,*) with mtica example #2. ... ok Tests pdist(X, 'russellrao') to see if the two implementations match on random double input data. ... ok Tests russellrao(*,*) with mtica example #1. ... ok Tests russellrao(*,*) with mtica example #2. ... ok Tests pdist(X, 'seuclidean') on the Iris data set. ... ok Tests pdist(X, 'seuclidean') on the Iris data set (float32). ... ok Tests pdist(X, 'test_seuclidean') [the non-C implementation] on the Iris data set. ... ok Tests pdist(X, 'seuclidean') on random data. ... ok Tests pdist(X, 'seuclidean') on random data (float32). ... 
ok Tests pdist(X, 'test_sqeuclidean') [the non-C implementation] on random data. ... ok Tests pdist(X, 'sokalmichener') to see if the two implementations match on random double input data. ... ok Tests pdist(X, 'sokalsneath') to see if the two implementations match on random double input data. ... ok Tests sokalsneath(*,*) with mtica example #1. ... ok Tests sokalsneath(*,*) with mtica example #2. ... ok Tests pdist(X, 'yule') to see if the two implementations match on random double input data. ... ok Tests yule(*,*) with mtica example #1. ... ok Tests yule(*,*) with mtica example #2. ... ok Tests squareform on a 1x1 matrix. ... ok Tests squareform on a 2x2 matrix. ... ok Tests squareform on an empty matrix. ... ok Tests squareform on an empty vector. ... ok Tests squareform on a square matrices of multiple sizes. ... ok Tests squareform on a 1-D array, length=1. ... ok Loading test data files for the scipy.spatial.distance tests. ... ok test_kdtree.test_count_neighbors.test_large_radius ... ok test_kdtree.test_count_neighbors.test_multiple_radius ... ok test_kdtree.test_count_neighbors.test_one_radius ... ok test_kdtree.test_random.test_approx ... ok test_kdtree.test_random.test_m_nearest ... ok test_kdtree.test_random.test_nearest ... ok test_kdtree.test_random.test_points_near ... ok test_kdtree.test_random.test_points_near_l1 ... ok test_kdtree.test_random.test_points_near_linf ... ok test_kdtree.test_random_ball.test_found_all ... ok test_kdtree.test_random_ball.test_in_ball ... ok test_kdtree.test_random_ball_approx.test_found_all ... ok test_kdtree.test_random_ball_approx.test_in_ball ... ok test_kdtree.test_random_ball_far.test_found_all ... ok test_kdtree.test_random_ball_far.test_in_ball ... ok test_kdtree.test_random_ball_l1.test_found_all ... ok test_kdtree.test_random_ball_l1.test_in_ball ... ok test_kdtree.test_random_ball_linf.test_found_all ... ok test_kdtree.test_random_ball_linf.test_in_ball ... ok test_kdtree.test_random_compiled.test_approx ... ok test_kdtree.test_random_compiled.test_m_nearest ... ok test_kdtree.test_random_compiled.test_nearest ... ok test_kdtree.test_random_compiled.test_points_near ... ok test_kdtree.test_random_compiled.test_points_near_l1 ... ok test_kdtree.test_random_compiled.test_points_near_linf ... ok test_kdtree.test_random_far.test_approx ... ok test_kdtree.test_random_far.test_m_nearest ... ok test_kdtree.test_random_far.test_nearest ... ok test_kdtree.test_random_far.test_points_near ... ok test_kdtree.test_random_far.test_points_near_l1 ... ok test_kdtree.test_random_far.test_points_near_linf ... ok test_kdtree.test_random_far_compiled.test_approx ... ok test_kdtree.test_random_far_compiled.test_m_nearest ... ok test_kdtree.test_random_far_compiled.test_nearest ... ok test_kdtree.test_random_far_compiled.test_points_near ... ok test_kdtree.test_random_far_compiled.test_points_near_l1 ... ok test_kdtree.test_random_far_compiled.test_points_near_linf ... ok test_kdtree.test_rectangle.test_max_inside ... ok test_kdtree.test_rectangle.test_max_one_side ... ok test_kdtree.test_rectangle.test_max_two_sides ... ok test_kdtree.test_rectangle.test_min_inside ... ok test_kdtree.test_rectangle.test_min_one_side ... ok test_kdtree.test_rectangle.test_min_two_sides ... ok test_kdtree.test_rectangle.test_split ... ok test_kdtree.test_small.test_approx ... ok test_kdtree.test_small.test_m_nearest ... ok test_kdtree.test_small.test_nearest ... ok test_kdtree.test_small.test_nearest_two ... ok test_kdtree.test_small.test_points_near ... 
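(The scipy.spatial.distance results above all follow the same pattern: compute a condensed distance matrix with pdist and compare the C implementation against the pure-Python one. A minimal sketch of the condensed/square round trip those tests rely on:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.random.rand(10, 4)      # 10 observations in 4 dimensions
    y = pdist(X, 'euclidean')      # condensed form, length 10*9/2 = 45
    D = squareform(y)              # 10x10 symmetric matrix, zero diagonal
    assert np.allclose(squareform(D), y)
)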
ok test_kdtree.test_small.test_points_near_l1 ... ok test_kdtree.test_small.test_points_near_linf ... ok test_kdtree.test_small_compiled.test_approx ... ok test_kdtree.test_small_compiled.test_m_nearest ... ok test_kdtree.test_small_compiled.test_nearest ... ok test_kdtree.test_small_compiled.test_nearest_two ... ok test_kdtree.test_small_compiled.test_points_near ... ok test_kdtree.test_small_compiled.test_points_near_l1 ... ok test_kdtree.test_small_compiled.test_points_near_linf ... ok test_kdtree.test_small_nonleaf.test_approx ... ok test_kdtree.test_small_nonleaf.test_m_nearest ... ok test_kdtree.test_small_nonleaf.test_nearest ... ok test_kdtree.test_small_nonleaf.test_nearest_two ... ok test_kdtree.test_small_nonleaf.test_points_near ... ok test_kdtree.test_small_nonleaf.test_points_near_l1 ... ok test_kdtree.test_small_nonleaf.test_points_near_linf ... ok test_kdtree.test_small_nonleaf_compiled.test_approx ... ok test_kdtree.test_small_nonleaf_compiled.test_m_nearest ... ok test_kdtree.test_small_nonleaf_compiled.test_nearest ... ok test_kdtree.test_small_nonleaf_compiled.test_nearest_two ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near_l1 ... ok test_kdtree.test_small_nonleaf_compiled.test_points_near_linf ... ok test_kdtree.test_sparse_distance_matrix.test_consistency_with_neighbors ... ok test_kdtree.test_two_random_trees.test_all_in_ball ... ok test_kdtree.test_two_random_trees.test_found_all ... ok test_kdtree.test_two_random_trees_far.test_all_in_ball ... ok test_kdtree.test_two_random_trees_far.test_found_all ... ok test_kdtree.test_two_random_trees_linf.test_all_in_ball ... ok test_kdtree.test_two_random_trees_linf.test_found_all ... ok test_kdtree.test_vectorization.test_single_query ... ok test_kdtree.test_vectorization.test_single_query_all_neighbors ... ok test_kdtree.test_vectorization.test_single_query_multiple_neighbors ... ok test_kdtree.test_vectorization.test_vectorized_query ... ok test_kdtree.test_vectorization.test_vectorized_query_all_neighbors ... ok test_kdtree.test_vectorization.test_vectorized_query_multiple_neighbors ... ok test_kdtree.test_vectorization_compiled.test_single_query ... ok test_kdtree.test_vectorization_compiled.test_single_query_multiple_neighbors ... ok test_kdtree.test_vectorization_compiled.test_vectorized_query ... ok test_kdtree.test_vectorization_compiled.test_vectorized_query_multiple_neighbors ... ok test_kdtree.test_random_ball_vectorized ... ok test_kdtree.test_distance_l2 ... ok test_kdtree.test_distance_l1 ... ok test_kdtree.test_distance_linf ... ok test_kdtree.test_distance_vectorization ... ok test_kdtree.test_distance_matrix ... ok test_kdtree.test_distance_matrix_looping ... ok test_ai_zeros (test_basic.TestAiry) ... ok test_airy (test_basic.TestAiry) ... ok test_airye (test_basic.TestAiry) ... ok test_bi_zeros (test_basic.TestAiry) ... ok test_assoc_laguerre (test_basic.TestAssocLaguerre) ... ok test_bernoulli (test_basic.TestBernoulli) ... ok test_i0 (test_basic.TestBessel) ... ok test_i0_series (test_basic.TestBessel) ... ok test_i0e (test_basic.TestBessel) ... ok test_i1 (test_basic.TestBessel) ... ok test_i1_series (test_basic.TestBessel) ... ok test_i1e (test_basic.TestBessel) ... ok test_it2i0k0 (test_basic.TestBessel) ... ok test_it2j0y0 (test_basic.TestBessel) ... ok test_iti0k0 (test_basic.TestBessel) ... ok test_itj0y0 (test_basic.TestBessel) ... ok test_iv (test_basic.TestBessel) ... 
ok
test_iv_cephes_vs_amos (test_basic.TestBessel) ... ok
test_iv_cephes_vs_amos_mass_test (test_basic.TestBessel) ... FAIL
test_iv_hyperg_poles (test_basic.TestBessel) ... ok
test_iv_series (test_basic.TestBessel) ... ok
test_ive (test_basic.TestBessel) ... ok
test_ivp (test_basic.TestBessel) ... ok
test_ivp0 (test_basic.TestBessel) ... ok
test_j0 (test_basic.TestBessel) ... ok
test_j1 (test_basic.TestBessel) ... ok
test_jacobi (test_basic.TestBessel) ... ok
test_jn (test_basic.TestBessel) ... ok
test_jn_zeros (test_basic.TestBessel) ... ok
test_jn_zeros_slow (test_basic.TestBessel) ... ok
test_jnjnp_zeros (test_basic.TestBessel) ... ok
test_jnp_zeros (test_basic.TestBessel) ... ok
test_jnyn_zeros (test_basic.TestBessel) ... ok
test_jv (test_basic.TestBessel) ... ok
test_jv_cephes_vs_amos (test_basic.TestBessel) ... ok
test_jve (test_basic.TestBessel) ... ok
test_jvp (test_basic.TestBessel) ... ok
test_k0 (test_basic.TestBessel) ... ok
test_k0e (test_basic.TestBessel) ... ok
test_k1 (test_basic.TestBessel) ... ok
test_k1e (test_basic.TestBessel) ... ok
test_kn (test_basic.TestBessel) ... ok
test_kv0 (test_basic.TestBessel) ... ok
test_kv1 (test_basic.TestBessel) ... ok
test_kv2 (test_basic.TestBessel) ... ok
test_kv_cephes_vs_amos (test_basic.TestBessel) ... ok
test_kve (test_basic.TestBessel) ... ok
test_kvp_n1 (test_basic.TestBessel) ... ok
test_kvp_n2 (test_basic.TestBessel) ... ok
test_kvp_v0n1 (test_basic.TestBessel) ... ok
test_negv (test_basic.TestBessel) ... ok
Real-valued Bessel I overflow ... ok
test_ticket_623 (test_basic.TestBessel) ... ok
Negative-order Bessels ... ok
Real-valued Bessel domains ... ok
test_y0 (test_basic.TestBessel) ... ok
test_y0_zeros (test_basic.TestBessel) ... ok
test_y1 (test_basic.TestBessel) ... ok
test_y1_zeros (test_basic.TestBessel) ... ok
test_y1p_zeros (test_basic.TestBessel) ... ok
test_yn (test_basic.TestBessel) ... ok
test_yn_zeros (test_basic.TestBessel) ... ok
test_ynp_zeros (test_basic.TestBessel) ... ok
test_ynp_zeros_large_order (test_basic.TestBessel) ... KNOWNFAIL: cephes/yv is not eps accurate for large orders on all platforms, and has nan/inf issues
test_yv (test_basic.TestBessel) ... ok
test_yv_cephes_vs_amos (test_basic.TestBessel) ... KNOWNFAIL: cephes/yv is not eps accurate for large orders on all platforms, and has nan/inf issues
test_yv_cephes_vs_amos_only_small_orders (test_basic.TestBessel) ... ok
test_yve (test_basic.TestBessel) ... ok
test_yvp (test_basic.TestBessel) ... ok
test_besselpoly (test_basic.TestBesselpoly) ... ok
test_beta (test_basic.TestBeta) ... ok
test_betainc (test_basic.TestBeta) ... ok
test_betaincinv (test_basic.TestBeta) ... ok
test_betaln (test_basic.TestBeta) ... ok
test_airy (test_basic.TestCephes) ... ok
test_airye (test_basic.TestCephes) ... ok
test_bdtr (test_basic.TestCephes) ... ok
test_bdtrc (test_basic.TestCephes) ... ok
test_bdtri (test_basic.TestCephes) ... ok
test_bdtrik (test_basic.TestCephes) ... ok
test_bdtrin (test_basic.TestCephes) ... ok
test_bei (test_basic.TestCephes) ... ok
test_beip (test_basic.TestCephes) ... ok
test_ber (test_basic.TestCephes) ... ok
test_berp (test_basic.TestCephes) ... ok
test_besselpoly (test_basic.TestCephes) ... ok
test_beta (test_basic.TestCephes) ... ok
test_betainc (test_basic.TestCephes) ... ok
test_betaincinv (test_basic.TestCephes) ... ok
test_betaln (test_basic.TestCephes) ... ok
test_btdtr (test_basic.TestCephes) ... ok
test_btdtri (test_basic.TestCephes) ... ok
test_btdtria (test_basic.TestCephes) ... ok
test_btdtrib (test_basic.TestCephes) ...
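(The one hard failure in the run is test_iv_cephes_vs_amos_mass_test; the two KNOWNFAILs are expected. It can be re-run in isolation to get a full traceback -- nose is already installed, since it is what runs the suite -- with something like:

    import nose
    nose.run(argv=['nosetests', '-v',
        'scipy.special.tests.test_basic:TestBessel.test_iv_cephes_vs_amos_mass_test'])
)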
ok test_cbrt (test_basic.TestCephes) ... ok test_chdtr (test_basic.TestCephes) ... ok test_chdtrc (test_basic.TestCephes) ... ok test_chdtri (test_basic.TestCephes) ... ok test_chdtriv (test_basic.TestCephes) ... ok test_chndtr (test_basic.TestCephes) ... ok test_chndtridf (test_basic.TestCephes) ... ok test_chndtrinc (test_basic.TestCephes) ... ok test_chndtrix (test_basic.TestCephes) ... ok test_cosdg (test_basic.TestCephes) ... ok test_cosm1 (test_basic.TestCephes) ... ok test_cotdg (test_basic.TestCephes) ... ok test_dawsn (test_basic.TestCephes) ... ok test_ellipe (test_basic.TestCephes) ... ok test_ellipeinc (test_basic.TestCephes) ... ok test_ellipj (test_basic.TestCephes) ... ok test_ellipk (test_basic.TestCephes) ... ok test_ellipkinc (test_basic.TestCephes) ... ok test_erf (test_basic.TestCephes) ... ok test_erfc (test_basic.TestCephes) ... ok test_exp1 (test_basic.TestCephes) ... ok test_exp10 (test_basic.TestCephes) ... ok test_exp1_reg (test_basic.TestCephes) ... ok test_exp2 (test_basic.TestCephes) ... ok test_expi (test_basic.TestCephes) ... ok test_expm1 (test_basic.TestCephes) ... ok test_expn (test_basic.TestCephes) ... ok test_fdtr (test_basic.TestCephes) ... ok test_fdtrc (test_basic.TestCephes) ... ok test_fdtri (test_basic.TestCephes) ... ok test_fdtridfd (test_basic.TestCephes) ... ok test_fresnel (test_basic.TestCephes) ... ok test_gamma (test_basic.TestCephes) ... ok test_gammainc (test_basic.TestCephes) ... ok test_gammaincc (test_basic.TestCephes) ... ok test_gammainccinv (test_basic.TestCephes) ... ok test_gammaln (test_basic.TestCephes) ... ok test_gdtr (test_basic.TestCephes) ... ok test_gdtrc (test_basic.TestCephes) ... ok test_gdtria (test_basic.TestCephes) ... ok test_gdtrib (test_basic.TestCephes) ... ok test_gdtrix (test_basic.TestCephes) ... ok test_hankel1 (test_basic.TestCephes) ... ok test_hankel1e (test_basic.TestCephes) ... ok test_hankel2 (test_basic.TestCephes) ... ok test_hankel2e (test_basic.TestCephes) ... ok test_hyp1f1 (test_basic.TestCephes) ... ok test_hyp1f2 (test_basic.TestCephes) ... ok test_hyp2f0 (test_basic.TestCephes) ... ok test_hyp2f1 (test_basic.TestCephes) ... ok test_hyp3f0 (test_basic.TestCephes) ... ok test_hyperu (test_basic.TestCephes) ... ok test_i0 (test_basic.TestCephes) ... ok test_i0e (test_basic.TestCephes) ... ok test_i1 (test_basic.TestCephes) ... ok test_i1e (test_basic.TestCephes) ... ok test_it2i0k0 (test_basic.TestCephes) ... ok test_it2j0y0 (test_basic.TestCephes) ... ok test_it2struve0 (test_basic.TestCephes) ... ok test_itairy (test_basic.TestCephes) ... ok test_iti0k0 (test_basic.TestCephes) ... ok test_itj0y0 (test_basic.TestCephes) ... ok test_itmodstruve0 (test_basic.TestCephes) ... ok test_itstruve0 (test_basic.TestCephes) ... ok test_iv (test_basic.TestCephes) ... ok test_j0 (test_basic.TestCephes) ... ok test_j1 (test_basic.TestCephes) ... ok test_jn (test_basic.TestCephes) ... ok test_jv (test_basic.TestCephes) ... ok test_k0 (test_basic.TestCephes) ... ok test_k0e (test_basic.TestCephes) ... ok test_k1 (test_basic.TestCephes) ... ok test_k1e (test_basic.TestCephes) ... ok test_kei (test_basic.TestCephes) ... ok test_keip (test_basic.TestCephes) ... ok test_ker (test_basic.TestCephes) ... ok test_kerp (test_basic.TestCephes) ... ok test_kn (test_basic.TestCephes) ... ok test_kolmogi (test_basic.TestCephes) ... ok test_kolmogorov (test_basic.TestCephes) ... ok test_log1p (test_basic.TestCephes) ... ok test_lpmv (test_basic.TestCephes) ... ok test_mathieu_a (test_basic.TestCephes) ... 
ok test_mathieu_b (test_basic.TestCephes) ... ok test_mathieu_cem (test_basic.TestCephes) ... ok test_mathieu_modcem1 (test_basic.TestCephes) ... ok test_mathieu_modcem2 (test_basic.TestCephes) ... ok test_mathieu_modsem1 (test_basic.TestCephes) ... ok test_mathieu_modsem2 (test_basic.TestCephes) ... ok test_mathieu_sem (test_basic.TestCephes) ... ok test_modfresnelm (test_basic.TestCephes) ... ok test_modfresnelp (test_basic.TestCephes) ... ok test_nbdtr (test_basic.TestCephes) ... ok test_nbdtrc (test_basic.TestCephes) ... ok test_nbdtri (test_basic.TestCephes) ... ok test_nbdtrin (test_basic.TestCephes) ... ok test_ncfdtr (test_basic.TestCephes) ... ok test_ncfdtri (test_basic.TestCephes) ... ok test_ncfdtridfd (test_basic.TestCephes) ... ok test_nctdtr (test_basic.TestCephes) ... ok test_nctdtrinc (test_basic.TestCephes) ... ok test_nctdtrit (test_basic.TestCephes) ... ok test_ndtr (test_basic.TestCephes) ... ok test_ndtri (test_basic.TestCephes) ... ok test_nrdtrimn (test_basic.TestCephes) ... ok test_nrdtrisd (test_basic.TestCephes) ... ok test_obl_ang1 (test_basic.TestCephes) ... ok test_obl_ang1_cv (test_basic.TestCephes) ... ok test_obl_rad1 (test_basic.TestCephes) ... ok test_obl_rad1_cv (test_basic.TestCephes) ... ok test_obl_rad2 (test_basic.TestCephes) ... ok test_obl_rad2_cv (test_basic.TestCephes) ... ok test_pbdv (test_basic.TestCephes) ... ok test_pbvv (test_basic.TestCephes) ... ok test_pbwa (test_basic.TestCephes) ... ok test_pdtr (test_basic.TestCephes) ... ok test_pdtrc (test_basic.TestCephes) ... ok test_pdtri (test_basic.TestCephes) ... ok test_pdtrik (test_basic.TestCephes) ... ok test_pro_ang1 (test_basic.TestCephes) ... ok test_pro_ang1_cv (test_basic.TestCephes) ... ok test_pro_rad1 (test_basic.TestCephes) ... ok test_pro_rad1_cv (test_basic.TestCephes) ... ok test_pro_rad2 (test_basic.TestCephes) ... ok test_pro_rad2_cv (test_basic.TestCephes) ... ok test_psi (test_basic.TestCephes) ... ok test_radian (test_basic.TestCephes) ... ok test_rgamma (test_basic.TestCephes) ... ok test_round (test_basic.TestCephes) ... ok test_shichi (test_basic.TestCephes) ... ok test_sici (test_basic.TestCephes) ... ok test_sindg (test_basic.TestCephes) ... ok test_smirnov (test_basic.TestCephes) ... ok test_smirnovi (test_basic.TestCephes) ... ok test_spence (test_basic.TestCephes) ... ok test_stdtr (test_basic.TestCephes) ... ok test_stdtridf (test_basic.TestCephes) ... ok test_stdtrit (test_basic.TestCephes) ... ok test_struve (test_basic.TestCephes) ... ok test_tandg (test_basic.TestCephes) ... ok test_tklmbda (test_basic.TestCephes) ... ok test_wofz (test_basic.TestCephes) ... ok test_y0 (test_basic.TestCephes) ... ok test_y1 (test_basic.TestCephes) ... ok test_yn (test_basic.TestCephes) ... ok test_yv (test_basic.TestCephes) ... ok test_zeta (test_basic.TestCephes) ... ok test_zetac (test_basic.TestCephes) ... ok test_chebyc (test_basic.TestCheby) ... ok test_chebys (test_basic.TestCheby) ... ok test_chebyt (test_basic.TestCheby) ... ok test_chebyu (test_basic.TestCheby) ... ok test_ellipe (test_basic.TestEllip) ... ok test_ellipeinc (test_basic.TestEllip) ... ok test_ellipj (test_basic.TestEllip) ... ok Regression test for #946. ... ok test_ellipk (test_basic.TestEllip) ... ok test_ellipkinc (test_basic.TestEllip) ... ok test_erf (test_basic.TestErf) ... ok test_erf_zeros (test_basic.TestErf) ... ok test_erfcinv (test_basic.TestErf) ... ok test_erfinv (test_basic.TestErf) ... ok test_errprint (test_basic.TestErf) ... ok test_euler (test_basic.TestEuler) ... 
ok test_exp10 (test_basic.TestExp) ... ok test_exp10more (test_basic.TestExp) ... ok test_exp2 (test_basic.TestExp) ... ok test_exp2more (test_basic.TestExp) ... ok test_expm1 (test_basic.TestExp) ... ok test_expm1more (test_basic.TestExp) ... ok test_fresnel (test_basic.TestFresnel) ... ok test_fresnel_zeros (test_basic.TestFresnel) ... ok test_fresnelc_zeros (test_basic.TestFresnel) ... ok test_fresnels_zeros (test_basic.TestFresnel) ... ok test_modfresnelm (test_basic.TestFresnelIntegral) ... ok test_modfresnelp (test_basic.TestFresnelIntegral) ... ok test_gamma (test_basic.TestGamma) ... ok test_gammainc (test_basic.TestGamma) ... ok test_gammaincc (test_basic.TestGamma) ... ok test_gammainccinv (test_basic.TestGamma) ... ok test_gammaincinv (test_basic.TestGamma) ... ok test_gammaln (test_basic.TestGamma) ... ok test_rgamma (test_basic.TestGamma) ... ok test_gegenbauer (test_basic.TestGegenbauer) ... ok test_hankel1 (test_basic.TestHankel) ... ok test_hankel1e (test_basic.TestHankel) ... ok test_hankel2 (test_basic.TestHankel) ... ok test_hankl2e (test_basic.TestHankel) ... ok test_negv (test_basic.TestHankel) ... ok test_hermite (test_basic.TestHermite) ... ok test_hermitenorm (test_basic.TestHermite) ... ok test_h1vp (test_basic.TestHyper) ... ok test_h2vp (test_basic.TestHyper) ... ok test_hyp0f1 (test_basic.TestHyper) ... ok test_hyp1f1 (test_basic.TestHyper) ... ok test_hyp1f2 (test_basic.TestHyper) ... ok test_hyp2f0 (test_basic.TestHyper) ... ok test_hyp2f1 (test_basic.TestHyper) ... ok test_hyp3f0 (test_basic.TestHyper) ... ok test_hyperu (test_basic.TestHyper) ... ok test_bei (test_basic.TestKelvin) ... ok test_bei_zeros (test_basic.TestKelvin) ... ok test_beip (test_basic.TestKelvin) ... ok test_beip_zeros (test_basic.TestKelvin) ... ok test_ber (test_basic.TestKelvin) ... ok test_ber_zeros (test_basic.TestKelvin) ... ok test_berp (test_basic.TestKelvin) ... ok test_berp_zeros (test_basic.TestKelvin) ... ok test_kei (test_basic.TestKelvin) ... ok test_kei_zeros (test_basic.TestKelvin) ... ok test_keip (test_basic.TestKelvin) ... ok test_keip_zeros (test_basic.TestKelvin) ... ok test_kelvin (test_basic.TestKelvin) ... ok test_kelvin_zeros (test_basic.TestKelvin) ... ok test_ker (test_basic.TestKelvin) ... ok test_ker_zeros (test_basic.TestKelvin) ... ok test_kerp (test_basic.TestKelvin) ... ok test_kerp_zeros (test_basic.TestKelvin) ... ok test_genlaguerre (test_basic.TestLaguerre) ... ok test_laguerre (test_basic.TestLaguerre) ... ok test_lmbda (test_basic.TestLambda) ... ok test_legendre (test_basic.TestLegendre) ... ok test_lpmn (test_basic.TestLegendreFunctions) ... ok test_lpmv (test_basic.TestLegendreFunctions) ... ok test_lpn (test_basic.TestLegendreFunctions) ... ok test_lqmn (test_basic.TestLegendreFunctions) ... ok test_lqmn_shape (test_basic.TestLegendreFunctions) ... ok test_lqn (test_basic.TestLegendreFunctions) ... ok test_log1p (test_basic.TestLog1p) ... ok test_log1pmore (test_basic.TestLog1p) ... ok test_mathieu_a (test_basic.TestMathieu) ... ok test_mathieu_even_coef (test_basic.TestMathieu) ... ok test_mathieu_odd_coef (test_basic.TestMathieu) ... ok test_obl_cv_seq (test_basic.TestOblCvSeq) ... ok test_pbdn_seq (test_basic.TestParabolicCylinder) ... ok test_pbdv (test_basic.TestParabolicCylinder) ... ok test_pbdv_gradient (test_basic.TestParabolicCylinder) ... ok test_pbdv_points (test_basic.TestParabolicCylinder) ... ok test_pbdv_seq (test_basic.TestParabolicCylinder) ... ok test_pbvv_gradient (test_basic.TestParabolicCylinder) ... 
ok test_polygamma (test_basic.TestPolygamma) ... ok test_pro_cv_seq (test_basic.TestProCvSeq) ... ok test_psi (test_basic.TestPsi) ... ok test_radian (test_basic.TestRadian) ... ok test_radianmore (test_basic.TestRadian) ... ok test_riccati_jn (test_basic.TestRiccati) ... ok test_riccati_yn (test_basic.TestRiccati) ... ok test_round (test_basic.TestRound) ... ok test_sph_harm (test_basic.TestSpherical) ... ok test_sph_in (test_basic.TestSpherical) ... ok test_sph_inkn (test_basic.TestSpherical) ... ok test_sph_jn (test_basic.TestSpherical) ... ok test_sph_jnyn (test_basic.TestSpherical) ... ok test_sph_kn (test_basic.TestSpherical) ... ok test_sph_yn (test_basic.TestSpherical) ... ok Regression test for #679 ... ok test_basic.TestStruve.test_some_values ... ok Check Struve function versus its power series ... ok test_specialpoints (test_basic.TestTandg) ... ok test_tandg (test_basic.TestTandg) ... ok test_tandgmore (test_basic.TestTandg) ... ok test_0 (test_basic.TestTrigonometric) ... ok test_cbrt (test_basic.TestTrigonometric) ... ok test_cbrtmore (test_basic.TestTrigonometric) ... ok test_cosdg (test_basic.TestTrigonometric) ... ok test_cosdgmore (test_basic.TestTrigonometric) ... ok test_cosm1 (test_basic.TestTrigonometric) ... ok test_cotdg (test_basic.TestTrigonometric) ... ok test_cotdgmore (test_basic.TestTrigonometric) ... ok test_sinc (test_basic.TestTrigonometric) ... ok test_sindg (test_basic.TestTrigonometric) ... ok test_sindgmore (test_basic.TestTrigonometric) ... ok test_specialpoints (test_basic.TestTrigonometric) ... ok test1 (test_spfun_stats.TestMultiGammaLn) ... ok test_ararg (test_spfun_stats.TestMultiGammaLn) ... ok test_bararg (test_spfun_stats.TestMultiGammaLn) ... ok test_continuous_basic.test_cont_basic(, (3.5704770516650459,), array(inf), array(inf), 0.31772708039386671, 0.021186836778540902, 1000, 'alphasample mean test') ... ok test_continuous_basic.test_cont_basic(, (3.5704770516650459,), array(inf), array(inf), 'alpha') ... ok test_continuous_basic.test_cont_basic(, (3.5704770516650459,), 'alpha') ... ok test_continuous_basic.test_cont_basic(, (3.5704770516650459,), 'alpha') ... ok test_continuous_basic.test_cont_basic(, (3.5704770516650459,), 'alpha') ... ok test_continuous_basic.test_cont_basic(, (), array(0.0), array(0.11685027506808487), 0.019485173966289539, 0.11461131582481687, 1000, 'anglitsample mean test') ... ok test_continuous_basic.test_cont_basic(, (), array(0.0), array(0.11685027506808487), 'anglit') ... ok test_continuous_basic.test_cont_basic(, (), 'anglit') ... ok test_continuous_basic.test_cont_basic(, (), 'anglit') ... ok test_continuous_basic.test_cont_basic(, (), 'anglit') ... ok test_continuous_basic.test_cont_basic(, (), array(0.5), array(0.125), 0.51691545030819297, 0.12586663168201145, 1000, 'arcsinesample mean test') ... ok test_continuous_basic.test_cont_basic(, (), array(0.5), array(0.125), 'arcsine') ... ok test_continuous_basic.test_cont_basic(, (), 'arcsine') ... ok test_continuous_basic.test_cont_basic(, (), 'arcsine') ... ok test_continuous_basic.test_cont_basic(, (), 'arcsine') ... ok test_continuous_basic.test_cont_basic(, (2.3098496451481823, 0.62687954300963677), array(0.78653818488354665), array(0.04264856955583702), 0.78396526766379881, 0.045854302817002014, 1000, 'betasample mean test') ... ok test_continuous_basic.test_cont_basic(, (2.3098496451481823, 0.62687954300963677), array(0.78653818488354665), array(0.04264856955583702), 'beta') ... 
[verbose nose output, condensed: the frozen-distribution reprs that originally sat between "test_cont_basic(" and the parameter tuple were stripped by the archive's HTML scrubbing, and the interleaved sample-array dumps are truncated]

test_continuous_basic.test_cont_basic runs five checks per distribution: a '<dist>sample mean test' that compares the mean and variance of 1000 variates against the analytic moments (e.g. chi2(55): analytic 55.0 / 110.0 vs. sample 54.44 / 99.93), followed by four further consistency checks. All report 'ok' for:

beta, betaprime, bradford, burr, cauchy, chi, chi2, dgamma, dweibull, erlang, expon, exponpow, exponweib, f, fatiguelife, fisk, foldcauchy, foldnorm, frechet_l, frechet_r, gamma, genextreme, gengamma, genhalflogistic, genlogistic, genpareto, gilbrat, gompertz, gumbel_l, gumbel_r, halfcauchy, halflogistic, halfnorm, hypsecant, invgamma, invnorm, johnsonsb, laplace, levy, levy_l, loggamma, logistic, loglaplace, lognorm, lomax, maxwell, nakagami, ncf, nct, ncx2, norm, pareto, powerlaw, powernorm, rayleigh, reciprocal, t, triang, truncexpon, truncnorm, tukeylambda, uniform, wald, weibull_max, weibull_min, wrapcauchy

test_continuous_basic.test_cont_basic_slow then starts, with cosine and gausshyper also 'ok'.
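For anyone skimming the log, each 'sample mean test' line records (analytic mean, analytic variance, sample mean, sample variance, n=1000). A minimal sketch of that pattern -- illustrative only, with made-up tolerances, not the suite's actual code:

import numpy as np
from scipy import stats

def sample_meanvar_check(distname, args, n=1000):
    # draw n variates and compare sample moments to the analytic ones,
    # mirroring the '<dist>sample mean test' lines above
    dist = getattr(stats, distname)
    mean, var = dist.stats(*args)        # analytic mean and variance
    sample = dist.rvs(*args, size=n)
    # tolerances here are illustrative; the suite uses its own thresholds
    assert abs(sample.mean() - mean) < 5 * np.sqrt(var / n)
    assert abs(sample.var() - var) < 0.5 * var

sample_meanvar_check('chi2', (55,))      # cf. the chi2 line: 55.0 / 110.0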
test_cont_basic_slow continues with gausshyper, genexpon, invweibull, johnsonsu, ksone, kstwobign, mielke, powerlognorm, rdist, recipinvgauss and rice, all 'ok'. Around the ksone checks the integrator prints, several times:

Warning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
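That message is quadpack's, raised via scipy.integrate.quad when it exhausts its default cap of 50 interval subdivisions (presumably while integrating ksone's awkward density for a moment or entropy check). The 'limit' keyword is the knob the message refers to; a small sketch with a deliberately nasty, made-up integrand (exact value 10):

from scipy import integrate

f = lambda x: x ** -0.9                  # integrable singularity at x = 0

val, err = integrate.quad(f, 0.0, 1.0)              # default limit=50
val, err = integrate.quad(f, 0.0, 1.0, limit=200)   # raise the subdivision cap

# or, as the message suggests, split at the known difficulty and
# integrate the subranges separately
a, _ = integrate.quad(f, 0.0, 0.01)
b, _ = integrate.quad(f, 0.01, 1.0)
print(a + b)                                        # ~10.0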
semicircular and vonmises close out test_cont_basic_slow, also 'ok'. test_continuous_extra.test_cont_extra then runs three checks per distribution -- '<dist> ppf limit test', '<dist> isf limit test' and '<dist> loc, scale test' -- and every one in this part of the run reports 'ok': alpha, anglit, arcsine, beta, betaprime, bradford, burr, cauchy, chi, chi2, cosine, dgamma, dweibull, erlang, expon, exponpow, exponweib, f, fatiguelife, fisk, foldcauchy, foldnorm, frechet_l, frechet_r, gamma, gausshyper, genexpon, genextreme, gengamma, genhalflogistic, genlogistic, genpareto, gilbrat, gompertz, gumbel_l, gumbel_r, halfcauchy, halflogistic, halfnorm, hypsecant, invgamma, invnorm, invweibull, johnsonsb, johnsonsu, ksone, kstwobign, laplace, levy, levy_l, loggamma, logistic, loglaplace, lognorm, lomax, maxwell, mielke, nakagami, ncf, nct, ncx2, norm, pareto, powerlaw, powerlognorm, powernorm, rayleigh, rdist, recipinvgauss, reciprocal, rice, semicircular, t, triang, truncexpon and truncnorm, with the same quadpack subdivision warning repeated around the ksone checks.
ok test_continuous_extra.test_cont_extra(, (-1.0978730080013919, 2.7306754109031979), 'truncnorm loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (3.1321477856738267,), 'tukeylambda ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (3.1321477856738267,), 'tukeylambda isf limit test') ... ok test_continuous_extra.test_cont_extra(, (3.1321477856738267,), 'tukeylambda loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (), 'uniform ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (), 'uniform isf limit test') ... ok test_continuous_extra.test_cont_extra(, (), 'uniform loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (3.9939042581071398,), 'vonmises ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (3.9939042581071398,), 'vonmises isf limit test') ... ok test_continuous_extra.test_cont_extra(, (3.9939042581071398,), 'vonmises loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (), 'wald ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (), 'wald isf limit test') ... ok test_continuous_extra.test_cont_extra(, (), 'wald loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (2.8687961709100187,), 'weibull_max ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (2.8687961709100187,), 'weibull_max isf limit test') ... ok test_continuous_extra.test_cont_extra(, (2.8687961709100187,), 'weibull_max loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (1.7866166930421596,), 'weibull_min ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (1.7866166930421596,), 'weibull_min isf limit test') ... ok test_continuous_extra.test_cont_extra(, (1.7866166930421596,), 'weibull_min loc, scale test') ... ok test_continuous_extra.test_cont_extra(, (0.031071279018614728,), 'wrapcauchy ppf limit test') ... ok test_continuous_extra.test_cont_extra(, (0.031071279018614728,), 'wrapcauchy isf limit test') ... ok test_continuous_extra.test_cont_extra(, (0.031071279018614728,), 'wrapcauchy loc, scale test') ... ok test_continuous_extra.test_540_567 ... ok test_discrete_basic.test_discrete_basic(0.29999999999999999, array(0.29999999999999999), 'bernoulli sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.20999999999999627, array(0.20999999999999999), 'bernoulli sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), 'bernoulli oth') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), -1.2380952380951449, 0.87287156094400487, 'bernoulli skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.29999999999999999,), array([0, 0, 0, ..., 1, 0, 0]), 0.01, 'bernoulli chisquare') ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/lib/function_base.py:185: Warning: The new semantics of histogram is now the default and the `new` keyword will be removed in NumPy 2.0. """, Warning) ok test_discrete_basic.test_discrete_basic(2.0015000000000001, array(2.0), 'binom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.1854977500000026, array(1.2), 'binom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom pmf_cdf') ... 
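The numpy warning in the middle of that run is only the histogram transition notice: the new semantics are already the default, so the fix is simply to stop passing `new`. A quick sketch with made-up data:

    import numpy as np

    data = np.random.randn(1000)   # hypothetical sample
    # new-style call: no `new=` keyword; `edges` holds the
    # len(counts) + 1 bin boundaries, `counts` the 10 occupancies
    counts, edges = np.histogram(data, bins=10)
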
ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), 'binom oth') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), -0.26248929225026352, 0.28057933666556623, 'binom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.40000000000000002), array([2, 2, 2, ..., 4, 1, 3]), 0.01, 'binom chisquare') ... ok test_discrete_basic.test_discrete_basic(0.32900000000000001, array(0.32731081784804011), 'boltzmann sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.43975900000001117, array(0.4344431884043245), 'boltzmann sample var test') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 'boltzmann oth') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), 6.7133652484343216, 2.418691392797208, 'boltzmann skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (1.3999999999999999, 19), array([0, 0, 0, ..., 2, 0, 0]), 0.01, 'boltzmann chisquare') ... ok test_discrete_basic.test_discrete_basic(0.0070000000000000001, array(7.9181711188056743e-17), 'dlaplace sample mean test') ... ok test_discrete_basic.test_discrete_basic(2.9319510000000588, array(2.9635341891843714), 'dlaplace sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 'dlaplace oth') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), 3.0660776822072453, 0.021996158609059947, 'dlaplace skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.80000000000000004,), array([ 0, 0, 0, ..., 4, -1, 0]), 0.01, 'dlaplace chisquare') ... ok test_discrete_basic.test_discrete_basic(1.9870000000000001, array(2.0), 'geom sample mean test') ... ok test_discrete_basic.test_discrete_basic(2.0098310000000303, array(2.0), 'geom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 'geom oth') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), 5.1935883716655766, 2.0476504362662378, 'geom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.5,), array([1, 1, 2, ..., 6, 1, 2]), 0.01, 'geom chisquare') ... ok test_discrete_basic.test_discrete_basic(2.3860000000000001, array(2.4000000000000004), 'hypergeom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.1500039999999776, array(1.1917241379310344), 'hypergeom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), 'hypergeom oth') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), -0.29686916362552029, 0.020906577365969316, 'hypergeom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (30, 12, 6), array([1, 1, 4, ..., 3, 2, 2]), 0.01, 'hypergeom chisquare') ... ok test_discrete_basic.test_discrete_basic(1.635, array(1.637035001905937), 'logser sample mean test') ... 
ok test_discrete_basic.test_discrete_basic(1.325775000000023, array(1.4127039072996714), 'logser sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'logser oth') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 7.559198377977479, 2.4947797038220592, 'logser skew_kurt') ... ok test_discrete_basic.test_discrete_basic(4.9210000000000003, array(5.0), 'nbinom sample mean test') ... ok test_discrete_basic.test_discrete_basic(9.4787590000000037, array(10.0), 'nbinom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom oth') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), 1.5000586959708402, 0.97358518373019021, 'nbinom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (5, 0.5), array([0, 2, 6, ..., 3, 3, 3]), 0.01, 'nbinom chisquare') ... ok test_discrete_basic.test_discrete_basic(0.58399999999999996, array(0.60000000000000009), 'nbinom sample mean test') ... ok test_discrete_basic.test_discrete_basic(1.4729440000000598, array(1.5000000000000002), 'nbinom sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 'nbinom oth') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), 13.929082276070467, 3.2071528858780165, 'nbinom skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.40000000000000002, 0.40000000000000002), array([0, 0, 0, ..., 0, 0, 0]), 0.01, 'nbinom chisquare') ... ok test_discrete_basic.test_discrete_basic(1.496, array(1.5031012098113492), 'planck sample mean test') ... ok test_discrete_basic.test_discrete_basic(3.8119840000000167, array(3.7624144567476914), 'planck sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 'planck oth') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), 5.0921201134828475, 1.9924056300476671, 'planck skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (0.51000000000000001,), array([1, 1, 1, ..., 7, 0, 2]), 0.01, 'planck chisquare') ... ok test_discrete_basic.test_discrete_basic(0.58550000000000002, array(0.59999999999999998), 'poisson sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.59768974999998681, array(0.59999999999999998), 'poisson sample var test') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 'poisson oth') ... ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), 1.9406814436782422, 1.3589585241917534, 'poisson skew_kurt') ... 
ok test_discrete_basic.test_discrete_basic(, (0.59999999999999998,), array([0, 0, 0, ..., 1, 0, 0]), 0.01, 'poisson chisquare') ... ok test_discrete_basic.test_discrete_basic(18.4725, array(18.5), 'randint sample mean test') ... ok test_discrete_basic.test_discrete_basic(48.800243749999929, array(47.916666666666664), 'randint sample var test') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), 'randint oth') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), -1.2115060412211844, -0.025412774105826177, 'randint skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (7, 31), array([27, 10, 15, ..., 16, 9, 17]), 0.01, 'randint chisquare') ... ok test_discrete_basic.test_discrete_basic(1.1194999999999999, array(1.110626535326148), 'zipf sample mean test') ... ok test_discrete_basic.test_discrete_basic(0.30921975000000479, array(0.28632645366450338), 'zipf sample var test') ... ok test_discrete_basic.test_discrete_basic(, (4,), 'zipf cdf_ppf') ... ok test_discrete_basic.test_discrete_basic(, (4,), 'zipf pmf_cdf') ... ok test_discrete_basic.test_discrete_basic(, (4,), 'zipf oth') ... ok test_discrete_basic.test_discrete_basic(, (4,), 167.01888834705358, 10.002579522051445, 'zipf skew_kurt') ... ok test_discrete_basic.test_discrete_basic(, (4,), array([1, 1, 1, ..., 1, 1, 1]), 0.01, 'zipf chisquare') ... ok test_discrete_basic.test_discrete_extra(, (0.29999999999999999,), 'bernoulli ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.29999999999999999,), 'bernoulli isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.29999999999999999,), 'bernoulli entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (5, 0.40000000000000002), 'binom ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (5, 0.40000000000000002), 'binom isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (5, 0.40000000000000002), 'binom entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (1.3999999999999999, 19), 'boltzmann ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (1.3999999999999999, 19), 'boltzmann isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (1.3999999999999999, 19), 'boltzmann entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.80000000000000004,), 'dlaplace ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.80000000000000004,), 'dlaplace isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.80000000000000004,), 'dlaplace entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.5,), 'geom ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.5,), 'geom isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.5,), 'geom entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (30, 12, 6), 'hypergeom ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (30, 12, 6), 'hypergeom isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (30, 12, 6), 'hypergeom entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'logser ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'logser isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'logser entropy nan test') ... 
ok test_discrete_basic.test_discrete_extra(, (5, 0.5), 'nbinom ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (5, 0.5), 'nbinom isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (5, 0.5), 'nbinom entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.40000000000000002, 0.40000000000000002), 'nbinom ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.40000000000000002, 0.40000000000000002), 'nbinom isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.40000000000000002, 0.40000000000000002), 'nbinom entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.51000000000000001,), 'planck ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.51000000000000001,), 'planck isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.51000000000000001,), 'planck entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'poisson ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'poisson isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (0.59999999999999998,), 'poisson entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (7, 31), 'randint ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (7, 31), 'randint isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (7, 31), 'randint entropy nan test') ... ok test_discrete_basic.test_discrete_extra(, (4,), 'zipf ppf limit test') ... ok test_discrete_basic.test_discrete_extra(, (4,), 'zipf isf limit test') ... ok test_discrete_basic.test_discrete_extra(, (4,), 'zipf entropy nan test') ... ok Failure: SkipTest (Skipping test: test_discrete_privateTest skipped due to test condition) ... SKIP: Skipping test: test_discrete_privateTest skipped due to test condition test_rvs (test_distributions.TestBernoulli) ... ok test_rvs (test_distributions.TestBinom) ... ok test_rvs (test_distributions.TestDLaplace) ... ok See ticket #761 ... ok See ticket #497 ... ok test_tail (test_distributions.TestExpon) ... ok test_zero (test_distributions.TestExpon) ... ok test_tail (test_distributions.TestExponpow) ... ok test_cdf_bounds (test_distributions.TestGenExpon) ... ok test_pdf_unity_area (test_distributions.TestGenExpon) ... ok test_cdf_sf (test_distributions.TestGeom) ... ok test_pmf (test_distributions.TestGeom) ... ok test_rvs (test_distributions.TestGeom) ... ok test_rvs (test_distributions.TestHypergeom) ... ok test_rvs (test_distributions.TestLogser) ... ok test_rvs (test_distributions.TestNBinom) ... ok test_rvs (test_distributions.TestPoisson) ... ok test_cdf (test_distributions.TestRandInt) ... ok test_pdf (test_distributions.TestRandInt) ... ok test_rvs (test_distributions.TestRandInt) ... ok test_rvs (test_distributions.TestRvDiscrete) ... ok test_rvs (test_distributions.TestZipf) ... ok test_distributions.test_all_distributions('uniform', (), 0.01) ... ok test_distributions.test_all_distributions('norm', (), 0.01) ... ok test_distributions.test_all_distributions('lognorm', (1.5876170641754364,), 0.01) ... ok test_distributions.test_all_distributions('expon', (), 0.01) ... ok test_distributions.test_all_distributions('beta', (1.4449890262755161, 1.5962868615831063), 0.01) ... ok test_distributions.test_all_distributions('powerlaw', (1.3849011459726603,), 0.01) ... ok test_distributions.test_all_distributions('bradford', (1.5756510141648885,), 0.01) ... 
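That SkipTest entry is the same mechanism behind the skips this thread is about: a decorator raises nose's SkipTest when its condition holds, and nose reports "Skipping test: <name><message>". A minimal sketch using numpy's test decorators; the always-true condition and the empty body are invented, not taken from the scipy source:

    from numpy.testing import dec

    @dec.skipif(True, "Test skipped due to test condition")
    def test_discrete_private():
        pass   # never runs while the condition above is true
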
ok test_distributions.test_all_distributions('burr', (1.2903295024027579, 1.1893913285543563), 0.01) ... ok test_distributions.test_all_distributions('fisk', (1.186729528255555,), 0.01) ... ok test_distributions.test_all_distributions('cauchy', (), 0.01) ... ok test_distributions.test_all_distributions('halfcauchy', (), 0.01) ... ok test_distributions.test_all_distributions('foldcauchy', (1.6127731798686067,), 0.01) ... ok test_distributions.test_all_distributions('gamma', (1.6566593889896288,), 0.01) ... ok test_distributions.test_all_distributions('gengamma', (1.4765309920093808, 1.0898243611955936), 0.01) ... ok test_distributions.test_all_distributions('loggamma', (1.7576039219664368,), 0.01) ... ok test_distributions.test_all_distributions('alpha', (1.8767703708227748,), 0.01) ... ok test_distributions.test_all_distributions('anglit', (), 0.01) ... ok test_distributions.test_all_distributions('arcsine', (), 0.01) ... ok test_distributions.test_all_distributions('betaprime', (1.9233810159462807, 1.8424602231401823), 0.01) ... ok test_distributions.test_all_distributions('erlang', (4, 0.89817312135787897, 0.92308243982017679), 0.01) ... ok test_distributions.test_all_distributions('dgamma', (1.5405999249480544,), 0.01) ... ok test_distributions.test_all_distributions('exponweib', (1.391296050234625, 1.7052833998544061), 0.01) ... ok test_distributions.test_all_distributions('exponpow', (1.2756341213121272,), 0.01) ... ok test_distributions.test_all_distributions('frechet_l', (1.8116287085078784,), 0.01) ... ok test_distributions.test_all_distributions('frechet_r', (1.8494859651863671,), 0.01) ... ok test_distributions.test_all_distributions('gilbrat', (), 0.01) ... ok test_distributions.test_all_distributions('f', (1.8950389674266752, 1.5898011835311598), 0.01) ... ok test_distributions.test_all_distributions('ncf', (1.9497648732321204, 1.5796950107456058, 1.4505631066311553), 0.01) ... ok test_distributions.test_all_distributions('chi2', (1.660245378622389,), 0.01) ... ok test_distributions.test_all_distributions('chi', (1.9962578393535728,), 0.01) ... ok test_distributions.test_all_distributions('nakagami', (1.9169412179474561,), 0.01) ... ok test_distributions.test_all_distributions('genpareto', (1.7933250841302242,), 0.01) ... ok test_distributions.test_all_distributions('genextreme', (1.0823729881966475,), 0.01) ... ok test_distributions.test_all_distributions('genhalflogistic', (1.6127831050407122,), 0.01) ... ok test_distributions.test_all_distributions('pareto', (1.4864442019691668,), 0.01) ... ok test_distributions.test_all_distributions('lomax', (1.6301473404114728,), 0.01) ... ok test_distributions.test_all_distributions('halfnorm', (), 0.01) ... ok test_distributions.test_all_distributions('halflogistic', (), 0.01) ... ok test_distributions.test_all_distributions('fatiguelife', (1.8450775756715152,), 0.001) ... ok test_distributions.test_all_distributions('foldnorm', (1.2430356220618561,), 0.01) ... ok test_distributions.test_all_distributions('ncx2', (1.7314892207908477, 1.117134293208518), 0.01) ... ok test_distributions.test_all_distributions('t', (1.2204605368678285,), 0.01) ... ok test_distributions.test_all_distributions('nct', (1.7945829717105759, 1.3325361492196555), 0.01) ... ok test_distributions.test_all_distributions('weibull_min', (1.8159130965336594,), 0.01) ... ok test_distributions.test_all_distributions('weibull_max', (1.1006075202160961,), 0.01) ... ok test_distributions.test_all_distributions('dweibull', (1.1463584889123037,), 0.01) ... 
ok test_distributions.test_all_distributions('maxwell', (), 0.01) ... ok test_distributions.test_all_distributions('rayleigh', (), 0.01) ... ok test_distributions.test_all_distributions('genlogistic', (1.6976706401912387,), 0.01) ... ok test_distributions.test_all_distributions('logistic', (), 0.01) ... ok test_distributions.test_all_distributions('gumbel_l', (), 0.01) ... ok test_distributions.test_all_distributions('gumbel_r', (), 0.01) ... ok test_distributions.test_all_distributions('gompertz', (1.0452340678656125,), 0.01) ... ok test_distributions.test_all_distributions('hypsecant', (), 0.01) ... ok test_distributions.test_all_distributions('laplace', (), 0.01) ... ok test_distributions.test_all_distributions('reciprocal', (0.57386603678916692, 1.573866036789167), 0.01) ... ok test_distributions.test_all_distributions('triang', (0.53419796826072397,), 0.01) ... ok test_distributions.test_all_distributions('tukeylambda', (1.6805891325622566,), 0.01) ... ok test_distributions.test_all_distributions('vonmises', (100,), 0.01) ... ok test_distributions.test_all_distributions('vonmises', (1.0266967946622052,), 0.01) ... ok test_expon (test_morestats.TestAnderson) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/stats.py:420: DeprecationWarning: scipy.stats.mean is deprecated; please update your code to use numpy.mean. Please note that: - numpy.mean axis argument defaults to None, not 0 - numpy.mean has a ddof argument to replace bias in a more general manner. scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) ok test_normal (test_morestats.TestAnderson) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/stats.py:1329: DeprecationWarning: scipy.stats.std is deprecated; please update your code to use numpy.std. Please note that: - numpy.std axis argument defaults to None, not 0 - numpy.std has a ddof argument to replace bias in a more general manner. scipy.stats.std(a, bias=True) can be replaced by numpy.std(x, axis=0, ddof=0), scipy.stats.std(a, bias=False) by numpy.std(x, axis=0, ddof=1). ddof=1).""", DeprecationWarning) /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/stats.py:1305: DeprecationWarning: scipy.stats.var is deprecated; please update your code to use numpy.var. Please note that: - numpy.var axis argument defaults to None, not 0 - numpy.var has a ddof argument to replace bias in a more general manner. scipy.stats.var(a, bias=True) can be replaced by numpy.var(x, axis=0, ddof=0), scipy.stats.var(a, bias=False) by var(x, axis=0, ddof=1). ddof=1).""", DeprecationWarning) ok test_approx (test_morestats.TestAnsari) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/morestats.py:603: UserWarning: Ties preclude use of exact statistic. warnings.warn("Ties preclude use of exact statistic.") ok test_exact (test_morestats.TestAnsari) ... ok test_small (test_morestats.TestAnsari) ... ok test_data (test_morestats.TestBartlett) ... ok test_data (test_morestats.TestBinomP) ... ok test_basic (test_morestats.TestFindRepeats) ... ok test_data (test_morestats.TestLevene) ... /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/stats/stats.py:498: DeprecationWarning: scipy.stats.median is deprecated; please update your code to use numpy.median. 
Please note that: - numpy.median axis argument defaults to None, not 0 - numpy.median has a ddof argument to replace bias in a more general manner. scipy.stats.median(a, bias=True) can be replaced by numpy.median(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) ok test_basic (test_morestats.TestShapiro) ... ok test_morestats.test_fligner ... ok test_morestats.test_mood ... ok Tests the cov function. ... ok Tests some computations of Kendall's tau ... ok Tests the seasonal Kendall tau. ... ok Tests some computations of Pearson's r ... ok Tests point biserial ... ok Tests some computations of Spearman's rho ... ok test_1D (test_mstats_basic.TestGMean) ... ok test_2D (test_mstats_basic.TestGMean) ... ok test_1D (test_mstats_basic.TestHMean) ... ok test_2D (test_mstats_basic.TestHMean) ... ok Tests the Friedman Chi-square test ... ok Tests the Kolmogorov-Smirnov 2 samples test ... ok Tests Obrien transform ... ok sum((testcase-mean(testcase,axis=0))**4,axis=0)/((sqrt(var(testcase)*3/4))**4)/4 ... ok Tests the mode ... ok mean((testcase-mean(testcase))**power,axis=0),axis=0))**power)) ... ok sum((testmathworks-mean(testmathworks,axis=0))**3,axis=0)/((sqrt(var(testmathworks)*4/5))**3)/5 ... ok variation = samplestd/mean ... ok test_2D (test_mstats_basic.TestPercentile) ... ok test_percentile (test_mstats_basic.TestPercentile) ... ok test_ranking (test_mstats_basic.TestRanking) ... ok Tests trimming ... ok Tests trimming. ... ok Tests the trimmed mean standard error. ... ok Tests the trimmed mean. ... ok Tests the Winsorization of the data. ... ok test_samplestd (test_mstats_basic.TestVariability) ... ok R does not have 'samplevar' so the following was used ... ok this is not in R, so used ... ok this is not in R, so used ... ok test_std (test_mstats_basic.TestVariability) ... ok this is not in R, so used ... ok var(testcase) = 1.666666667 ... ok not in R, so used ... ok not in R, so tested by using ... ok Tests ideal-fourths ... ok Tests the Marits-Jarrett estimator ... ok Tests the confidence intervals of the trimmed mean. ... ok test_hdquantiles (test_mstats_extras.TestQuantiles) ... ok test_meanBIG (test_stats.TestBasicStats) ... ok test_meanHUGE (test_stats.TestBasicStats) ... ok test_meanLITTLE (test_stats.TestBasicStats) ... ok test_meanROUND (test_stats.TestBasicStats) ... ok test_meanTINY (test_stats.TestBasicStats) ... ok test_meanX (test_stats.TestBasicStats) ... ok test_meanZERO (test_stats.TestBasicStats) ... ok test_stdBIG (test_stats.TestBasicStats) ... ok test_stdHUGE (test_stats.TestBasicStats) ... ok test_stdLITTLE (test_stats.TestBasicStats) ... ok test_stdROUND (test_stats.TestBasicStats) ... ok test_stdTINY (test_stats.TestBasicStats) ... ok test_stdX (test_stats.TestBasicStats) ... ok test_stdZERO (test_stats.TestBasicStats) ... ok test_tmeanX (test_stats.TestBasicStats) ... ok test_tstdX (test_stats.TestBasicStats) ... ok test_tvarX (test_stats.TestBasicStats) ... ok test_basic (test_stats.TestCMedian) ... ok test_pBIGBIG (test_stats.TestCorr) ... ok test_pBIGHUGE (test_stats.TestCorr) ... ok test_pBIGLITTLE (test_stats.TestCorr) ... ok test_pBIGROUND (test_stats.TestCorr) ... ok test_pBIGTINY (test_stats.TestCorr) ... ok test_pHUGEHUGE (test_stats.TestCorr) ... ok test_pHUGEROUND (test_stats.TestCorr) ... ok test_pHUGETINY (test_stats.TestCorr) ... ok test_pLITTLEHUGE (test_stats.TestCorr) ... ok test_pLITTLELITTLE (test_stats.TestCorr) ... ok test_pLITTLEROUND (test_stats.TestCorr) ... ok test_pLITTLETINY (test_stats.TestCorr) ... 
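The DeprecationWarnings above already spell out the migrations; collected in one place, with a toy array (the data are invented, the replacements are the ones quoted in the warnings themselves; note that numpy.median takes no ddof argument, despite the boilerplate in that last message):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])    # hypothetical data

    m   = np.mean(x, axis=0)              # for scipy.stats.mean(x)
    s0  = np.std(x, axis=0, ddof=0)       # for scipy.stats.std(x, bias=True)
    s1  = np.std(x, axis=0, ddof=1)       # for scipy.stats.std(x, bias=False)
    v0  = np.var(x, axis=0, ddof=0)       # for scipy.stats.var(x, bias=True)
    v1  = np.var(x, axis=0, ddof=1)       # for scipy.stats.var(x, bias=False)
    med = np.median(x, axis=0)            # for scipy.stats.median(x)
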
ok test_pROUNDROUND (test_stats.TestCorr) ... ok test_pTINYROUND (test_stats.TestCorr) ... ok test_pTINYTINY (test_stats.TestCorr) ... ok test_pXBIG (test_stats.TestCorr) ... ok test_pXHUGE (test_stats.TestCorr) ... ok test_pXLITTLE (test_stats.TestCorr) ... ok test_pXROUND (test_stats.TestCorr) ... ok test_pXTINY (test_stats.TestCorr) ... ok test_pXX (test_stats.TestCorr) ... ok test_sBIGBIG (test_stats.TestCorr) ... ok test_sBIGHUGE (test_stats.TestCorr) ... ok test_sBIGLITTLE (test_stats.TestCorr) ... ok test_sBIGROUND (test_stats.TestCorr) ... ok test_sBIGTINY (test_stats.TestCorr) ... ok test_sHUGEHUGE (test_stats.TestCorr) ... ok test_sHUGEROUND (test_stats.TestCorr) ... ok test_sHUGETINY (test_stats.TestCorr) ... ok test_sLITTLEHUGE (test_stats.TestCorr) ... ok test_sLITTLELITTLE (test_stats.TestCorr) ... ok test_sLITTLEROUND (test_stats.TestCorr) ... ok test_sLITTLETINY (test_stats.TestCorr) ... ok test_sROUNDROUND (test_stats.TestCorr) ... ok test_sTINYROUND (test_stats.TestCorr) ... ok test_sTINYTINY (test_stats.TestCorr) ... ok test_sXBIG (test_stats.TestCorr) ... ok test_sXHUGE (test_stats.TestCorr) ... ok test_sXLITTLE (test_stats.TestCorr) ... ok test_sXROUND (test_stats.TestCorr) ... ok test_sXTINY (test_stats.TestCorr) ... ok test_sXX (test_stats.TestCorr) ... ok test_1D_array (test_stats.TestGMean) ... ok test_1D_list (test_stats.TestGMean) ... ok test_2D_array_default (test_stats.TestGMean) ... ok test_2D_array_dim1 (test_stats.TestGMean) ... ok test_large_values (test_stats.TestGMean) ... ok test_1D_array (test_stats.TestHMean) ... ok test_1D_list (test_stats.TestHMean) ... ok test_2D_array_default (test_stats.TestHMean) ... ok test_2D_array_dim1 (test_stats.TestHMean) ... ok test_2d (test_stats.TestMean) ... ok test_basic (test_stats.TestMean) ... ok test_ravel (test_stats.TestMean) ... ok Regression test for #760. ... ok test_basic (test_stats.TestMedian) ... ok test_basic2 (test_stats.TestMedian) ... ok test_basic (test_stats.TestMode) ... ok sum((testcase-mean(testcase,axis=0))**4,axis=0)/((sqrt(var(testcase)*3/4))**4)/4 ... ok test_kurtosis_array_scalar (test_stats.TestMoments) ... ok mean((testcase-mean(testcase))**power,axis=0),axis=0))**power)) ... ok sum((testmathworks-mean(testmathworks,axis=0))**3,axis=0)/ ... ok `skew` must return a scalar for 1-dim input ... ok variation = samplestd/mean ... ok Check nanmean when all values are nan. ... ok Check nanmean when no values are nan. ... ok Check nanmean when some values only are nan. ... ok Check nanmedian when all values are nan. ... ok Check nanmedian when no values are nan. ... ok Check nanmedian when some values only are nan. ... ok Check nanstd when all values are nan. ... ok Check nanstd when no values are nan. ... ok Check nanstd when some values only are nan. ... ok test_2D (test_stats.TestPercentile) ... ok test_median (test_stats.TestPercentile) ... ok test_percentile (test_stats.TestPercentile) ... ok compared with multivariate ols with pinv ... ok W.II.F. Regress BIG on X. ... ok W.IV.B. Regress X on X. ... ok W.IV.D. Regress ZERO on X. ... ok Regress a line with sinusoidal noise. ... ok W.II.A.0. Print ROUND with only one digit. ... ok W.II.A.1. Y = INT(2.6*7 -0.2) (Y should be 18) ... ok W.II.A.2. Y = 2-INT(EXP(LOG(SQR(2)*SQR(2)))) (Y should be 0) ... ok W.II.A.3. Y = INT(3-EXP(LOG(SQR(2)*SQR(2)))) (Y should be 1) ... ok test_2d (test_stats.TestStd) ... ok test_basic (test_stats.TestStd) ... ok test_onesample (test_stats.TestStudentTest) ... ok test_basic (test_stats.TestThreshold) ... 
ok test_samplestd (test_stats.TestVariability) ... ok R does not have 'samplevar' so the following was used ... ok this is not in R, so used ... ok this is not in R, so used ... ok test_std (test_stats.TestVariability) ... ok this is not in R, so used ... ok var(testcase) = 1.666666667 ... ok not in R, so used ... ok not in R, so tested by using ... ok test_stats.test_scoreatpercentile ... ok test_stats.test_percentileofscore(35.0, 35.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(50.0, 50.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(50.0, 50.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(60.0, 60.0) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(30.0, 30) ... ok test_stats.test_percentileofscore(35.0, 35.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(40.0, 40.0) ... ok test_stats.test_percentileofscore(45.0, 45.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(60.0, 60.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(30.0, 30.0) ... ok test_stats.test_percentileofscore(10.0, 10.0) ... ok test_stats.test_percentileofscore(5.0, 5.0) ... ok test_stats.test_percentileofscore(0.0, 0.0) ... ok test_stats.test_percentileofscore(10.0, 10.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(95.0, 95.0) ... ok test_stats.test_percentileofscore(90.0, 90.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(100.0, 100.0) ... ok test_stats.test_percentileofscore(0.0, 0.0) ... ok test_stats.test_friedmanchisquare ... Warning: friedmanchisquare test using Chisquared aproximation ok test_stats.test_kstest ... ok test_stats.test_ks_2samp ... ok test_stats.test_ttest_rel ... ok test_stats.test_ttest_ind ... ok test_stats.test_ttest_1samp_new ... ok test_stats.test_describe ... ok test_stats.test_normalitytests((3.9237191815818493, 0.14059672529747549), (3.92371918, 0.14059673)) ... ok test_stats.test_normalitytests((1.9807882609087573, 0.047615023828432301), (1.98078826, 0.047615020000000001)) ... ok test_stats.test_normalitytests((-0.014037344047597383, 0.98880018772590561), (-0.014037340000000001, 0.98880018999999997)) ... ok test_stats.test_pointbiserial ... ok test_stats.test_obrientransform ... ok convert simple expr to blitz ... ok convert fdtd equation to blitz. ... ok convert simple expr to blitz ... ok result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] ... KNOWNFAIL: Test skipped due to known failure result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] ... KNOWNFAIL: Test skipped due to known failure result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] ... KNOWNFAIL: Test skipped due to known failure bad path should return same as default (and warn) ... 
warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ok make sure it handles relative values. ... ok default behavior is to return current directory ... ok make sure it handles relative values ... warning: specified build_dir '..' does not exist or is not writable. Trying default locations ok test_simple (test_build_tools.TestConfigureSysArgv) ... ok bad path should return same as default (and warn) ... warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ok make sure it handles relative values. ... ok default behavior returns tempdir ... ok make sure it handles relative values ... warning: specified build_dir '..' does not exist or is not writable. Trying default locations ok test_call_function (test_c_spec.CallableConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_d9b504d1a91ae5e28245fdf60a03c4143.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_complex_return (test_c_spec.ComplexConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp:618: warning: deprecated conversion from string constant to ‘char*’ 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_fa23bc7871bacd4fec33347968a187e50.cpp:618: warning: deprecated conversion from string constant to ‘char*’ ok test_complex_var_in (test_c_spec.ComplexConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_a180646b7d2cf09f9e86c6b05225fb8f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ ok test_type_match_complex (test_c_spec.ComplexConverter) ... ok test_type_match_float (test_c_spec.ComplexConverter) ... ok test_type_match_int (test_c_spec.ComplexConverter) ... ok test_type_match_string (test_c_spec.ComplexConverter) ... ok test_return (test_c_spec.DictConverter) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp:618: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_d4c708553d17d1900611ece6086675480.cpp:618: warning: deprecated conversion from string constant to ‘char*’ ok test_type_match_bad (test_c_spec.DictConverter) ... ok test_type_match_good (test_c_spec.DictConverter) ... ok test_var_in (test_c_spec.DictConverter) ... /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
/var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_file_to_py (test_c_spec.FileConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a3.cpp: In function ‘PyObject* compiled_func(PyObject*, PyObject*)’: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a3.cpp:668: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a3.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a3.cpp: In function ‘PyObject* compiled_func(PyObject*, PyObject*)’: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a3.cpp:668: warning: deprecated conversion from string constant to ‘char*’ ok test_py_to_file (test_c_spec.FileConverter) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f684853.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_float_return (test_c_spec.FloatConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp: In function ‘PyObject* test(PyObject*, PyObject*, PyObject*)’: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f0.cpp:618: warning: deprecated conversion from string constant to ‘char*’ ok test_float_var_in (test_c_spec.FloatConverter) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e509b9a37b4e8a1c2b4451bf96f8a5d10.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok test_type_match_complex (test_c_spec.FloatConverter) ... ok test_type_match_float (test_c_spec.FloatConverter) ... ok test_type_match_int (test_c_spec.FloatConverter) ... ok test_type_match_string (test_c_spec.FloatConverter) ... ok test_int_return (test_c_spec.IntConverter) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
/var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_891f1e7e690dee6278075468434962310.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok test_type_match_complex (test_c_spec.IntConverter) ... ok test_type_match_float (test_c_spec.IntConverter) ... ok test_type_match_int (test_c_spec.IntConverter) ... ok test_type_match_string (test_c_spec.IntConverter) ... ok test_var_in (test_c_spec.IntConverter) ... /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_621b8d548204e11af39c23a618aa35440.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_return (test_c_spec.ListConverter) ... /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e8b1705336d0617ebc4b7d3722215c3b0.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_speed (test_c_spec.ListConverter) ... /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp: In function ???PyObject* with_cxx(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
/var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp: In function ???PyObject* no_checking(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:668: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:668: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp: In function ???PyObject* with_cxx(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp: In function ???PyObject* no_checking(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:668: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f078b196f06544bbc7571c8c92fbaf260.cpp:668: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. speed test for list access compiler: scxx: 0.0281748771667 C, no checking: 0.0196361541748 python: 0.207987070084 ok test_type_match_bad (test_c_spec.ListConverter) ... ok test_type_match_good (test_c_spec.ListConverter) ... ok test_var_in (test_c_spec.ListConverter) ... /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_3a5c7ad3ac45a98d03cd9168232f7d8f0.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_3a5c7ad3ac45a98d03cd9168232f7d8f0.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_3a5c7ad3ac45a98d03cd9168232f7d8f0.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
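(A note on the first of the two recurring warnings: g++ emits "deprecated conversion from string constant to 'char*'" whenever a string literal is passed where a mutable char* is expected; the fixed line numbers (618/668 in every generated sc_*.cpp) suggest it comes from boilerplate in the generated wrapper rather than from the test code itself. A minimal sketch that reproduces the warning; the function name is made up for illustration, not taken from the weave sources:

    // warn.cpp -- compile with: g++ -c warn.cpp
    void takes_mutable(char* s) { (void)s; }

    int main() {
        takes_mutable("hello");   // warning: deprecated conversion
                                  // from string constant to 'char*'
        takes_mutable(const_cast<char*>("hello"));  // silenced by a cast
        const char* ok = "hello"; // fine: literal bound to const char*
        (void)ok;
        return 0;
    }

It is harmless noise as far as the results go; every test still reports ok.)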
test_convert_to_dict (test_c_spec.SequenceConverter) ... ok
test_convert_to_list (test_c_spec.SequenceConverter) ... ok
test_convert_to_string (test_c_spec.SequenceConverter) ... ok
test_convert_to_tuple (test_c_spec.SequenceConverter) ... ok
test_return (test_c_spec.StringConverter) ... ok
test_type_match_complex (test_c_spec.StringConverter) ... ok
test_type_match_float (test_c_spec.StringConverter) ... ok
test_type_match_int (test_c_spec.StringConverter) ... ok
test_type_match_string (test_c_spec.StringConverter) ... ok
test_var_in (test_c_spec.StringConverter) ... ok
test_call_function (test_c_spec.TestCallableConverterGcc) ... ok
test_call_function (test_c_spec.TestCallableConverterUnix) ... ok
test_complex_return (test_c_spec.TestComplexConverterGcc) ... ok
test_complex_var_in (test_c_spec.TestComplexConverterGcc) ... ok
test_type_match_complex (test_c_spec.TestComplexConverterGcc) ... ok
test_type_match_float (test_c_spec.TestComplexConverterGcc) ... ok
test_type_match_int (test_c_spec.TestComplexConverterGcc) ... ok
test_type_match_string (test_c_spec.TestComplexConverterGcc) ... ok
test_complex_return (test_c_spec.TestComplexConverterUnix) ... ok
test_complex_var_in (test_c_spec.TestComplexConverterUnix) ... ok
test_type_match_complex (test_c_spec.TestComplexConverterUnix) ... ok
test_type_match_float (test_c_spec.TestComplexConverterUnix) ... ok
test_type_match_int (test_c_spec.TestComplexConverterUnix) ... ok
test_type_match_string (test_c_spec.TestComplexConverterUnix) ... ok
test_return (test_c_spec.TestDictConverterGcc) ... ok
test_type_match_bad (test_c_spec.TestDictConverterGcc) ... ok
test_type_match_good (test_c_spec.TestDictConverterGcc) ... ok
test_var_in (test_c_spec.TestDictConverterGcc) ... ok
test_return (test_c_spec.TestDictConverterUnix) ... ok
test_type_match_bad (test_c_spec.TestDictConverterUnix) ... ok
test_type_match_good (test_c_spec.TestDictConverterUnix) ... ok
test_var_in (test_c_spec.TestDictConverterUnix) ... ok
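(The other repeated message does not come from the generated code at all: AvailabilityMacros.h:108 is an explicit #warning in the OS X SDK header that fires when compiling for i386 with a deployment target below 10.4. The Python framework here was presumably built with MACOSX_DEPLOYMENT_TARGET=10.3, and distutils propagates that setting to every extension it builds. Roughly, as a sketch rather than the verbatim header:

    // avail.cpp -- paraphrase of the check in AvailabilityMacros.h
    #include <AvailabilityMacros.h>

    #if defined(__i386__) && MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_4
    #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid.
    #endif

    int main() { return 0; }

If it bothers you, exporting MACOSX_DEPLOYMENT_TARGET=10.4 (or higher) before running the tests should make this particular warning go away; like the char* one, it does not affect the results.)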
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a1.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a1.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_004f78823106f12cdad3c63d275ac19a1.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok test_file_to_py (test_c_spec.TestFileConverterGcc) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a4.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a4.cpp:668: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a4.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a4.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a4.cpp:668: warning: deprecated conversion from string constant to ???char*??? ok test_py_to_file (test_c_spec.TestFileConverterGcc) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f684854.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_file_to_py (test_c_spec.TestFileConverterUnix) ... /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a5.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a5.cpp:668: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a5.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a5.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/sc_df5e0d29270a5bc86cb25e3614f7f09a5.cpp:668: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_py_to_file (test_c_spec.TestFileConverterUnix) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f684855.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_float_return (test_c_spec.TestFloatConverterGcc) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_119507aaff6cbb49b207aae11a6add790.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok test_float_var_in (test_c_spec.TestFloatConverterGcc) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_f9d03bc857dee022afacdfb25e5eb7b90.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok test_type_match_complex (test_c_spec.TestFloatConverterGcc) ... ok test_type_match_float (test_c_spec.TestFloatConverterGcc) ... ok test_type_match_int (test_c_spec.TestFloatConverterGcc) ... ok test_type_match_string (test_c_spec.TestFloatConverterGcc) ... ok test_float_return (test_c_spec.TestFloatConverterUnix) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_6d33db8f51c8ac682d0bf38988af258f1.cpp:618: warning: deprecated conversion from string constant to ???char*??? 
test_float_var_in (test_c_spec.TestFloatConverterUnix) ... ok
test_type_match_complex (test_c_spec.TestFloatConverterUnix) ... ok
test_type_match_float (test_c_spec.TestFloatConverterUnix) ... ok
test_type_match_int (test_c_spec.TestFloatConverterUnix) ... ok
test_type_match_string (test_c_spec.TestFloatConverterUnix) ... ok
test_int_return (test_c_spec.TestIntConverterGcc) ... ok
test_type_match_complex (test_c_spec.TestIntConverterGcc) ... ok
test_type_match_float (test_c_spec.TestIntConverterGcc) ... ok
test_type_match_int (test_c_spec.TestIntConverterGcc) ... ok
test_type_match_string (test_c_spec.TestIntConverterGcc) ... ok
test_var_in (test_c_spec.TestIntConverterGcc) ... ok
test_int_return (test_c_spec.TestIntConverterUnix) ... ok
test_type_match_complex (test_c_spec.TestIntConverterUnix) ... ok
test_type_match_float (test_c_spec.TestIntConverterUnix) ... ok
test_type_match_int (test_c_spec.TestIntConverterUnix) ... ok
test_type_match_string (test_c_spec.TestIntConverterUnix) ... ok
test_var_in (test_c_spec.TestIntConverterUnix) ... ok
test_return (test_c_spec.TestListConverterGcc) ... ok
test_speed (test_c_spec.TestListConverterGcc) ...
speed test for list access
compiler: gcc
scxx: 0.0279221534729
C, no checking: 0.01948595047
python: 0.196350812912
ok
test_type_match_bad (test_c_spec.TestListConverterGcc) ... ok
test_type_match_good (test_c_spec.TestListConverterGcc) ... ok
test_var_in (test_c_spec.TestListConverterGcc) ... ok
test_return (test_c_spec.TestListConverterUnix) ... ok
test_speed (test_c_spec.TestListConverterUnix) ...
speed test for list access
compiler:
scxx: 0.0299417972565
C, no checking: 0.0203900337219
python: 0.205413103104
ok
test_type_match_bad (test_c_spec.TestListConverterUnix) ... ok
test_type_match_good (test_c_spec.TestListConverterUnix) ... ok
test_var_in (test_c_spec.TestListConverterUnix) ... ok
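The two "speed test for list access" blocks above time the same list-indexing loop three ways: through weave's scxx C++ wrappers, through raw C with no type or bounds checking, and in pure Python. A rough sketch of that kind of comparison (illustrative only, not the actual test code; it assumes scipy.weave is importable and that the scxx py::list proxy exposes length() and per-element integer conversion):

    import time
    from scipy import weave

    def scxx_sum(a):
        # 'a' arrives in C++ as a scxx py::list proxy; each a[i] access
        # goes through a checked Python-object conversion.
        code = """
               long total = 0;
               for (int i = 0; i < a.length(); i++)
                   total += int(a[i]);
               return_val = total;
               """
        return weave.inline(code, ['a'])

    def py_sum(a):
        total = 0
        for x in a:
            total += x
        return total

    a = range(100000)
    scxx_sum(a)  # first call pays the one-off compile cost
    t0 = time.time(); scxx_sum(a); print 'scxx:  ', time.time() - t0
    t0 = time.time(); py_sum(a);   print 'python:', time.time() - t0

The numbers in the log follow the expected ordering: unchecked C is fastest, scxx adds some conversion overhead, and the pure Python loop is roughly an order of magnitude slower than either.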
test_convert_to_dict (test_c_spec.TestSequenceConverterGcc) ...
In file included from /usr/include/architecture/i386/math.h:626,
                 from /usr/include/math.h:28,
                 from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235,
                 from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58,
                 from /Users/fguimara/.python26_compiled/sc_67933097fdd75c33d4a8510b92e0360316.cpp:10:
/usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid.
ok
test_convert_to_list (test_c_spec.TestSequenceConverterGcc) ... ok
test_convert_to_string (test_c_spec.TestSequenceConverterGcc) ... ok
test_convert_to_tuple (test_c_spec.TestSequenceConverterGcc) ... ok
test_convert_to_dict (test_c_spec.TestSequenceConverterUnix) ... ok
test_convert_to_list (test_c_spec.TestSequenceConverterUnix) ... ok
test_convert_to_string (test_c_spec.TestSequenceConverterUnix) ... ok
test_convert_to_tuple (test_c_spec.TestSequenceConverterUnix) ... ok
test_return (test_c_spec.TestStringConverterGcc) ... ok
test_type_match_complex (test_c_spec.TestStringConverterGcc) ... ok
test_type_match_float (test_c_spec.TestStringConverterGcc) ... ok
test_type_match_int (test_c_spec.TestStringConverterGcc) ... ok
test_type_match_string (test_c_spec.TestStringConverterGcc) ... ok
test_var_in (test_c_spec.TestStringConverterGcc) ... ok
test_return (test_c_spec.TestStringConverterUnix) ... ok
test_type_match_complex (test_c_spec.TestStringConverterUnix) ... ok
test_type_match_float (test_c_spec.TestStringConverterUnix) ... ok
test_type_match_int (test_c_spec.TestStringConverterUnix) ... ok
test_type_match_string (test_c_spec.TestStringConverterUnix) ... ok
test_var_in (test_c_spec.TestStringConverterUnix) ... ok
test_return (test_c_spec.TestTupleConverterGcc) ... ok
test_type_match_bad (test_c_spec.TestTupleConverterGcc) ... ok
test_type_match_good (test_c_spec.TestTupleConverterGcc) ... ok
test_var_in (test_c_spec.TestTupleConverterGcc) ... ok
test_return (test_c_spec.TestTupleConverterUnix) ... ok
test_type_match_bad (test_c_spec.TestTupleConverterUnix) ... ok
test_type_match_good (test_c_spec.TestTupleConverterUnix) ... ok
test_var_in (test_c_spec.TestTupleConverterUnix) ... ok
test_return (test_c_spec.TupleConverter) ... ok
test_type_match_bad (test_c_spec.TupleConverter) ... ok
test_type_match_good (test_c_spec.TupleConverter) ... ok
test_var_in (test_c_spec.TupleConverter) ... ok
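All of the *Converter cases above (int, float, string, list, tuple and sequence, each in plain, Gcc and Unix compiler flavours) exercise the same machinery: weave.inline picks a C/C++ type converter for each argument from its Python type, caches the compiled extension keyed on those types, and the type_match tests check that a call with different argument types triggers a fresh conversion rather than reuse of the wrong binary. A minimal illustration (a hypothetical snippet, not the test code):

    from scipy import weave

    def echo(a):
        # Each distinct Python type of 'a' selects a different converter,
        # and hence a separately compiled (and cached) extension.
        return weave.inline('return_val = a;', ['a'])

    print echo(1)    # int converter
    print echo(1.5)  # float converter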
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp:618: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp: In function ???PyObject* test(PyObject*, PyObject*, PyObject*)???: /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp:618: warning: deprecated conversion from string constant to ???char*??? /var/folders/8+/8+sCMp+HHpGx1sG+PBFsKk+++TI/-Tmp-/tmpQuaJIm/sc_e2f52d98283f8252698424a9b90b23821.cpp:618: warning: deprecated conversion from string constant to ???char*??? ok There should always be a writable file -- even if it is in temp ... ok test_add_function_ordered (test_catalog.TestCatalog) ... ok Test persisting a function in the default catalog ... ok MODULE in search path should be replaced by module_dir. ... ok MODULE in search path should be removed if module_dir==None. ... ok If MODULE is absent, module_dir shouldn't be in search path. ... ok Make sure environment variable is getting used. ... ok Be sure we get at least one file even without specifying the path. ... ok Ignore bad paths in the path. ... ok test_clear_module_directory (test_catalog.TestCatalog) ... ok test_get_environ_path (test_catalog.TestCatalog) ... ok Shouldn't get any files when temp doesn't exist and no path set. ... ok Shouldn't get a single file from the temp dir. ... ok test_set_module_directory (test_catalog.TestCatalog) ... ok Check that we can create a file in the writable directory ... ok Check that we can create a file in the writable directory ... ok There should always be a writable file -- even if search paths contain ... ok test_bad_path (test_catalog.TestCatalogPath) ... ok test_current (test_catalog.TestCatalogPath) ... ok test_default (test_catalog.TestCatalogPath) ... ok test_module (test_catalog.TestCatalogPath) ... ok test_path (test_catalog.TestCatalogPath) ... ok test_user (test_catalog.TestCatalogPath) ... 
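For what it's worth, the ‘deprecated conversion from string constant to char*’ warning is just gcc objecting to C++ that binds a string literal to a non-const char*; since it fires at the same few lines of every generated file, it presumably comes from weave's generated wrapper boilerplate rather than from the test code itself. A minimal sketch that provokes the same warning through weave (illustrative only, not code from the scipy sources):

    from scipy import weave

    code = r"""
    char *s = "hello";        // gcc warns: deprecated conversion from
                              // string constant to 'char*'
    const char *t = "hello";  // the warning-free spelling
    printf("%s %s\n", s, t);
    """
    weave.inline(code, [], verbose=2)  # verbose=2 echoes the compiler output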
test_is_writable (test_catalog.TestDefaultDir) ... ok
get_test_dir (test_catalog.TestGetCatalog) ... ok
test_create_catalog (test_catalog.TestGetCatalog) ... ok
test_nonexistent_catalog_is_none (test_catalog.TestGetCatalog) ... ok
building extensions here: /Users/fguimara/.python26_compiled/m1
test_assign_variable_types (test_ext_tools.TestAssignVariableTypes) ... ok
Simplest possible function ... ok
test_multi_functions (test_ext_tools.TestExtModule) ... ok
test_return_tuple (test_ext_tools.TestExtModule) ... ok
Simplest possible module ... ok
test_string_and_int (test_ext_tools.TestExtModule) ... ok
test_with_include (test_ext_tools.TestExtModule) ...
test printing a value:2
ok
test_exceptions (test_inline_tools.TestInline) ... ok
test_complex_return (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_complex_var_in (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_inline (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_type_match_complex128 (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_type_match_float (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_type_match_int (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_type_match_string (test_numpy_scalar_spec.NumpyComplexScalarConverter) ... ok
test_complex_return (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
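The test_ext_tools cases are exercising weave's extension-module builder, which is what the "building extensions here:" line refers to: generated C++ sources land in that directory, get compiled once, and are then reused from the cache. Roughly, following the pattern from the weave docs (the module and function names below are invented for illustration):

    from scipy.weave import ext_tools

    mod = ext_tools.ext_module('example_ext')
    a = 1                # a sample value so weave can infer the C type of 'a'
    func = ext_tools.ext_function('times_two', 'return_val = 2*a;', ['a'])
    mod.add_function(func)
    mod.compile()        # writes and builds the module in the cache directory

    import example_ext
    print example_ext.times_two(21)   # -> 42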
test_complex_var_in (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_inline (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_type_match_complex128 (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_type_match_float (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_type_match_int (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_type_match_string (test_numpy_scalar_spec.TestNumpyComplexScalarConverterGcc) ... ok
test_complex_return (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_complex_var_in (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_inline (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_type_match_complex128 (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_type_match_float (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_type_match_int (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_type_match_string (test_numpy_scalar_spec.TestNumpyComplexScalarConverterUnix) ... ok
test_numpy_scalar_spec.setup_test_location ... ok
test_numpy_scalar_spec.teardown_test_location ... ok
test_empty (test_scxx_dict.TestDictConstruct) ... ok
test_complex (test_scxx_dict.TestDictDel) ... ok
test_double (test_scxx_dict.TestDictDel) ... ok
test_int (test_scxx_dict.TestDictDel) ... ok
test_obj (test_scxx_dict.TestDictDel) ... ok
test_std_string (test_scxx_dict.TestDictDel) ... ok
test_char (test_scxx_dict.TestDictGetItemOp) ... ok
test_char_fail (test_scxx_dict.TestDictGetItemOp) ... KNOWNFAIL: Test skipped due to known failure
test_obj (test_scxx_dict.TestDictGetItemOp) ... ok
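The KNOWNFAIL entries are not new breakage: that text is what numpy's knownfailureif decorator reports when a test has been deliberately marked as a known failure (the message shown is the decorator's default). A sketch of the mechanism, with the condition invented for illustration:

    from numpy.testing import dec

    @dec.knownfailureif(True)   # a true condition marks the test a known failure
    def test_char_fail():
        # never reached; nose's KnownFailure plugin reports
        # "KNOWNFAIL: Test skipped due to known failure" instead
        assert False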
test_obj_fail (test_scxx_dict.TestDictGetItemOp) ... KNOWNFAIL: Test skipped due to known failure
test_string (test_scxx_dict.TestDictGetItemOp) ... ok
test_complex (test_scxx_dict.TestDictHasKey) ... ok
test_double (test_scxx_dict.TestDictHasKey) ... ok
test_int (test_scxx_dict.TestDictHasKey) ... ok
test_obj (test_scxx_dict.TestDictHasKey) ... ok
test_std_string (test_scxx_dict.TestDictHasKey) ... ok
test_string (test_scxx_dict.TestDictHasKey) ... ok
test_string_fail (test_scxx_dict.TestDictHasKey) ... ok
test_clear (test_scxx_dict.TestDictOthers) ... ok
test_items (test_scxx_dict.TestDictOthers) ... ok
test_keys (test_scxx_dict.TestDictOthers) ... ok
test_update (test_scxx_dict.TestDictOthers) ... ok
test_values (test_scxx_dict.TestDictOthers) ... ok
test_new_complex_int (test_scxx_dict.TestDictSetOperator) ... ok
test_new_double_int (test_scxx_dict.TestDictSetOperator) ... ok
test_new_int_int (test_scxx_dict.TestDictSetOperator) ... ok
test_new_obj_int (test_scxx_dict.TestDictSetOperator) ... ok
test_new_std_string_int (test_scxx_dict.TestDictSetOperator) ... ok
test_overwrite_complex_int (test_scxx_dict.TestDictSetOperator) ... ok
test_overwrite_double_int (test_scxx_dict.TestDictSetOperator) ... ok
test_overwrite_int_int (test_scxx_dict.TestDictSetOperator) ... ok
test_overwrite_obj_int (test_scxx_dict.TestDictSetOperator) ... ok
test_overwrite_std_string_int (test_scxx_dict.TestDictSetOperator) ... ok
test_attr_call (test_scxx_object.TestObjectAttr) ... ok
test_char (test_scxx_object.TestObjectAttr) ...
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_char_fail (test_scxx_object.TestObjectAttr) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_e1b6ea4f0b0df5d150b83823772d97490.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_obj (test_scxx_object.TestObjectAttr) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_582a78a99cf985b07302beb7f2a696900.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_obj_fail (test_scxx_object.TestObjectAttr) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_1396e571abe8dd07b399a4ce285b5a450.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string (test_scxx_object.TestObjectAttr) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_48d0ee43e16d4165294e5dd6c4d8f1820.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string_fail (test_scxx_object.TestObjectAttr) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_3909c060f05f65d0eb55988950c4f33d0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_args (test_scxx_object.TestObjectCall) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_4d0a8855a5e191e0228c21d2a06943910.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_args_kw (test_scxx_object.TestObjectCall) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_4d7fbc0d0468c1bf8744d2b4b4050cce0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_noargs (test_scxx_object.TestObjectCall) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_7cd9e7827ed07ad1029bbb5dd37a85910.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_noargs_with_args (test_scxx_object.TestObjectCall) ... 
ok test_complex_cast (test_scxx_object.TestObjectCast) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_ba8c696c499a28e274d3853e71af2fa80.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_double_cast (test_scxx_object.TestObjectCast) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2ab8b0f58eacbd6386713a05640406e00.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_float_cast (test_scxx_object.TestObjectCast) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_31c4250725bcfa3b5d60c280f51c47cb0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
ok test_int_cast (test_scxx_object.TestObjectCast) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_fe66e79003a5cfb4ef7a3c9f81c1bb330.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string_cast (test_scxx_object.TestObjectCast) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_1e25cc7cd30267c3bc9156d566ec98a60.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_char (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_4bc67a73386136680947e4662bb8c3d90.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_double (test_scxx_object.TestObjectCmp) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_8eb4002229a3151b0ae87317f5da94b70.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_equal (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_de61c48bb88e8c68ba6054f4ee8ef99d0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_equal_objects (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_de61c48bb88e8c68ba6054f4ee8ef99d1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_gt (test_scxx_object.TestObjectCmp) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2a554b529982a8abaf7462cb001750570.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_gte (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_c06ff480261d8ecd18334e79d6671c810.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_int (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_1928489fb6e476d52eb79bc7c5da50f90.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_int2 (test_scxx_object.TestObjectCmp) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_8a8fbaf7452a7012c260d3ac3e92fdd40.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_lt (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_3637b6c88bae129a141874ba000d9eac0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_lte (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_bb7d265a6d2a50dcaa1173794350bc7f0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_not_equal (test_scxx_object.TestObjectCmp) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_7578b5284ef5806e59fc156ae08bd9b50.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_std_string (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_e669d6abe3e46389c07caf145d54c4000.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_unsigned_long (test_scxx_object.TestObjectCmp) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_dd0e3309119dc037d897d3d2cfa166130.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_complex (test_scxx_object.TestObjectConstruct) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_997971139e1ce6da66f3393cf5b3fd750.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_double (test_scxx_object.TestObjectConstruct) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_cb9aebfd89a9371a1bc8d9fc650222c50.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_float (test_scxx_object.TestObjectConstruct) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_cb1ea57f3f445b71c448b76ca01937630.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_int (test_scxx_object.TestObjectConstruct) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_b6c320edcc9b29352b63a38b6cb43e490.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_std_string (test_scxx_object.TestObjectConstruct) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_1c3224e3539793c89530fb0ec7961d620.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string (test_scxx_object.TestObjectConstruct) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_834e7b60161cc8e2a77e30069e25ef130.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_char (test_scxx_object.TestObjectDel) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_38253a4cf459b0eba9b12f9b03fa3b930.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_object (test_scxx_object.TestObjectDel) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_e58f7b25d3bbcb446ef92511f2c3260c0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string (test_scxx_object.TestObjectDel) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_7111de1456910167a499bdd14e6535230.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_func (test_scxx_object.TestObjectHasattr) ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_0be10fe0b495cd93d7c6ca1fc2d478340.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok THIS NEEDS TO MOVE TO THE INLINE TEST SUITE ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2ce7e985c07ef6394a33d8c03b7f32f20.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. after and after2 should be equal in the following before, after, after2: 2 3 3 ok test_std_string (test_scxx_object.TestObjectHasattr) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_8a08ec43c97c9d5323e06cc5cd2aa9ff0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
ok

[weave compiles a fresh sc_*.cpp extension module for most of these tests, and every compile emitted the same pair of gcc warnings -- one for the generated module under ~/.python26_compiled/m1/ and one for scipy/weave/scxx/weave_imp.cpp. The warning is shown once below and elided from the rest of the log:]

In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid.

test_string (test_scxx_object.TestObjectHasattr) ... ok
test_string_fail (test_scxx_object.TestObjectHasattr) ... ok
test_hash (test_scxx_object.TestObjectHash) ... hash: 123
ok
test_false (test_scxx_object.TestObjectIsCallable) ... ok
test_true (test_scxx_object.TestObjectIsCallable) ... ok
test_false (test_scxx_object.TestObjectIsTrue) ... ok
test_true (test_scxx_object.TestObjectIsTrue) ... ok
test_args (test_scxx_object.TestObjectMcall) ... ok
test_args_kw (test_scxx_object.TestObjectMcall) ... ok
test_noargs (test_scxx_object.TestObjectMcall) ... ok
test_noargs_with_args (test_scxx_object.TestObjectMcall) ... ok
test_std_args (test_scxx_object.TestObjectMcall) ... ok
test_std_args_kw (test_scxx_object.TestObjectMcall) ... ok
test_std_noargs (test_scxx_object.TestObjectMcall) ... ok
test_stringio (test_scxx_object.TestObjectPrint) ... 'how now brown cow'
ok
test_repr (test_scxx_object.TestObjectRepr) ... ok
test_existing_char (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_char1 (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_complex (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_double (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_int (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_object (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_string (test_scxx_object.TestObjectSetAttr) ... ok
test_existing_string1 (test_scxx_object.TestObjectSetAttr) ... ok
test_new_char (test_scxx_object.TestObjectSetAttr) ... ok
test_new_fail (test_scxx_object.TestObjectSetAttr) ... ok
test_new_object (test_scxx_object.TestObjectSetAttr) ... ok
test_new_string (test_scxx_object.TestObjectSetAttr) ... ok
test_list_refcount (test_scxx_object.TestObjectSetItemOpIndex) ... ok
test_set_char (test_scxx_object.TestObjectSetItemOpIndex) ... ok
test_set_double (test_scxx_object.TestObjectSetItemOpIndex) ... ok
test_set_int (test_scxx_object.TestObjectSetItemOpIndex) ... ok
test_set_string (test_scxx_object.TestObjectSetItemOpIndex) ... ok
test_key_refcount (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_set_char (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_set_class (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_set_complex (test_scxx_object.TestObjectSetItemOpKey) ... KNOWNFAIL: Test skipped due to known failure
test_set_double_exists (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_set_double_new (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_set_from_member (test_scxx_object.TestObjectSetItemOpKey) ... ok
test_len (test_scxx_object.TestObjectSize) ... ok
test_length (test_scxx_object.TestObjectSize) ... ok
test_size (test_scxx_object.TestObjectSize) ... ok
test_str (test_scxx_object.TestObjectStr) ... str return
ok
test_type (test_scxx_object.TestObjectType) ... ok
test_unicode (test_scxx_object.TestObjectUnicode) ... ok
test_access_set_speed (test_scxx_sequence.TestList) ... %s access/set -- b[i] = a[i] for N = (, 1000000)
python: 0.151823997498
weave: 0.0589859485626
ok
test_access_speed (test_scxx_sequence.TestList) ... %s access -- val = a[i] for N = (, 1000000)
python1: 0.0770809650421
python2: 0.0412080287933
weave: 0.0168900489807
ok
test_append (test_scxx_sequence.TestList) ... ok
test_append_passed_item (test_scxx_sequence.TestList) ... ok
test_conversion (test_scxx_sequence.TestList) ... ok
Test the "count" method for lists. We'll assume ...
/Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf0.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)':
/Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf0.cpp:665: warning: deprecated conversion from string constant to 'char*'
ok
test_get_item_index_error (test_scxx_sequence.TestList) ... ok
Test the "in" method for lists. We'll assume ...
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_90123e531d29a9e420ef0593478e0f4e0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_26203b1d65506e261b064172a3ff63310.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_insert (test_scxx_sequence.TestList) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_823c773b6d3c815362d29482da8527e10.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_95e4d771e3b766110737eb2a31f4296a0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_13585456ec5eb503dde6ebac601be74e0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2b9b3dc6c02a3369b8342e2e36031cfb0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_int_add_speed (test_scxx_sequence.TestList) ... 
int add -- b[i] = a[i] + 1 for N = 1000000 python: 0.182474136353 In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2c6e9fef6e7eda7af4ad39dafa414cb50.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. weave: 0.059159040451 ok test_set_item_index_error (test_scxx_sequence.TestList) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_62f9e913ce02eb0ccf4c1ef19e4582230.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_set_item_operator_equal (test_scxx_sequence.TestList) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_635f32ca4fc90eed8e523027e079ec360.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
ok test_set_item_operator_equal_created (test_scxx_sequence.TestList) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_4279ab89d7853b76c8b9b49ae08488aa0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_string_add_speed (test_scxx_sequence.TestList) ... string add -- b[i] = a[i] + "blah" for N = 1000000 python: 0.311142206192 In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_60fb89d77cdd18dda4f283bbc2b16eeb0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. weave: 0.695421934128 ok test_access_set_speed (test_scxx_sequence.TestTuple) ... %s access/set -- b[i] = a[i] for N = (, 1000000) python: 0.169914007187 In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_287d43e643a563b251d17a481a4a23c61.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. weave: 0.0466229915619 ok test_access_speed (test_scxx_sequence.TestTuple) ... %s access -- val = a[i] for N = (, 1000000) python1: 0.100818157196 python2: 0.0396411418915 In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_9fb967e655acfb86193f5121a570aaf31.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. weave: 0.0177319049835 ok test_conversion (test_scxx_sequence.TestTuple) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_8e474387ddd197fffc021b70eb70a2d61.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok Test the "count" method for lists. We'll assume ... 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_edee47bd05c2569658907980e11cffc91.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_e48618811d3a607a00f038ed6e0b96c41.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_4664d211ccee6c3c6144b98b21b595211.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf1.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf1.cpp:665: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. /Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf1.cpp: In function ???PyObject* compiled_func(PyObject*, PyObject*)???: /Users/fguimara/.python26_compiled/m1/sc_7f6b8d306afa52853b82f4dd4c22efaf1.cpp:665: warning: deprecated conversion from string constant to ???char*??? In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_2f40f5a0e48167d4f29de396d13c364e1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_get_item_operator_index_error (test_scxx_sequence.TestTuple) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_6e45ee5385acbd8909461b7f70a52c4b0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok Test the "in" method for lists. We'll assume ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_3214a110840bc292f6babc6b34e1db2a1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_dc8bfe930753c93df44c7f52cc7833621.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_ab722679600f98d75117d44020297f0d1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_d1500bbef7e35bbbb9b03fdc6fa537311.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_0656fa87ae09e48da1f062e498e7e78e1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_44b466b8740210f29459dcea810fd41d1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_f7a389fe01b1df33f115253d0bd8a8801.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_90123e531d29a9e420ef0593478e0f4e1.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_26203b1d65506e261b064172a3ff63311.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. 
In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_set_item_index_error (test_scxx_sequence.TestTuple) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_489089cd98c31206457951a0c89d48690.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_set_item_operator_equal (test_scxx_sequence.TestTuple) ... In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/object.h:11, from /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/scxx/weave_imp.cpp:7: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. In file included from /usr/include/architecture/i386/math.h:626, from /usr/include/math.h:28, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyport.h:235, from /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:58, from /Users/fguimara/.python26_compiled/m1/sc_1063a45b266805721b8d0eeca9bcf4fc0.cpp:10: /usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid. ok test_set_item_operator_equal_fail (test_scxx_sequence.TestTuple) ... ok test_error1 (test_size_check.TestBinaryOpSize) ... ok test_error2 (test_size_check.TestBinaryOpSize) ... ok test_scalar (test_size_check.TestBinaryOpSize) ... ok test_x1 (test_size_check.TestBinaryOpSize) ... ok test_x_y (test_size_check.TestBinaryOpSize) ... ok test_x_y2 (test_size_check.TestBinaryOpSize) ... ok test_x_y3 (test_size_check.TestBinaryOpSize) ... ok test_x_y4 (test_size_check.TestBinaryOpSize) ... ok test_x_y5 (test_size_check.TestBinaryOpSize) ... 
test_x_y6 (test_size_check.TestBinaryOpSize) ... ok
test_x_y7 (test_size_check.TestBinaryOpSize) ... ok
test_y1 (test_size_check.TestBinaryOpSize) ... ok
test_error1 (test_size_check.TestDummyArray) ... ok
test_error2 (test_size_check.TestDummyArray) ... ok
test_scalar (test_size_check.TestDummyArray) ... ok
test_x1 (test_size_check.TestDummyArray) ... ok
test_x_y (test_size_check.TestDummyArray) ... ok
test_x_y2 (test_size_check.TestDummyArray) ... ok
test_x_y3 (test_size_check.TestDummyArray) ... ok
test_x_y4 (test_size_check.TestDummyArray) ... ok
test_x_y5 (test_size_check.TestDummyArray) ... ok
test_x_y6 (test_size_check.TestDummyArray) ... ok
test_x_y7 (test_size_check.TestDummyArray) ... ok
test_y1 (test_size_check.TestDummyArray) ... ok
test_1d_0 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_1 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_10 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_2 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_3 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_4 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_5 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_6 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_7 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_8 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_9 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_index_0 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_index_1 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_index_2 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_index_3 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_index_calculated (test_size_check.TestDummyArrayIndexing) ... ok
through a bunch of different indexes at it for good measure. ... ok
test_1d_stride_0 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_1 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_10 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_11 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_12 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_2 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_3 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_4 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_5 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_6 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_7 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_8 (test_size_check.TestDummyArrayIndexing) ... ok
test_1d_stride_9 (test_size_check.TestDummyArrayIndexing) ... ok
test_2d_0 (test_size_check.TestDummyArrayIndexing) ... ok
test_2d_1 (test_size_check.TestDummyArrayIndexing) ... ok
test_2d_2 (test_size_check.TestDummyArrayIndexing) ... ok
through a bunch of different indexes at it for good measure. ... ok
through a bunch of different indexes at it for good measure. ... ok
test_calculated_index (test_size_check.TestExpressions) ... ok
test_calculated_index2 (test_size_check.TestExpressions) ... ok
test_generic_1d (test_size_check.TestExpressions) ... ok
test_single_index (test_size_check.TestExpressions) ... ok
test_scalar (test_size_check.TestMakeSameLength) ... ok
test_x_scalar (test_size_check.TestMakeSameLength) ... ok
test_x_short (test_size_check.TestMakeSameLength) ... ok
test_y_scalar (test_size_check.TestMakeSameLength) ... ok
test_y_short (test_size_check.TestMakeSameLength) ... ok
test_1d_0 (test_size_check.TestReduction) ... ok
test_2d_0 (test_size_check.TestReduction) ... ok
test_2d_1 (test_size_check.TestReduction) ... ok
test_3d_0 (test_size_check.TestReduction) ... ok
test_error0 (test_size_check.TestReduction) ... ok
test_error1 (test_size_check.TestReduction) ... ok
test_exclusive_end (test_slice_handler.TestBuildSliceAtom) ... ok
match slice from a[1:] ... ok
match slice from a[1::] ... ok
match slice from a[1:2] ... ok
match slice from a[1:2:] ... ok
match slice from a[1:2:3] ... ok
match slice from a[1::3] ... ok
match slice from a[:] ... ok
match slice from a[::] ... ok
match slice from a[:2] ... ok
match slice from a[:2:] ... ok
match slice from a[:2:3] ... ok
match slice from a[:1+i+2:] ... ok
match slice from a[0] ... ok
match slice from a[::3] ... ok
transform a[:,:] = b[:,1:1+2:3] *(c[1-2+i:,:] - c[:,:]) ... ok
test_type_match_array (test_standard_array_spec.TestArrayConverter) ... ok
test_type_match_int (test_standard_array_spec.TestArrayConverter) ... ok
test_type_match_string (test_standard_array_spec.TestArrayConverter) ... ok

======================================================================
ERROR: test_imresize (test_pilutil.TestPILUtil)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/tests/test_pilutil.py", line 23, in test_imresize
    im1 = pilutil.imresize(im,T(1.1))
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/pilutil.py", line 224, in imresize
    im = toimage(arr)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/pilutil.py", line 103, in toimage
    image = Image.fromstring('L',shape,bytedata.tostring())
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL/Image.py", line 1796, in fromstring
    im = new(mode, size)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL/Image.py", line 1763, in new
    return Image()._new(core.fill(mode, size, color))
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL/Image.py", line 37, in __getattr__
    raise ImportError("The _imaging C module is not installed")
ImportError: The _imaging C module is not installed

======================================================================
FAIL: Test generator for parametric tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/tests/test_pilutil.py", line 35, in tst_fromimage
    assert img.min() >= imin
AssertionError

======================================================================
FAIL: Test generator for parametric tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/tests/test_pilutil.py", line 35, in tst_fromimage
    assert img.min() >= imin
AssertionError

======================================================================
FAIL: Test generator for parametric tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/nose-0.11.3-py2.6.egg/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/misc/tests/test_pilutil.py", line 35, in tst_fromimage
    assert img.min() >= imin
AssertionError

======================================================================
FAIL: test_iv_cephes_vs_amos_mass_test (test_basic.TestBessel)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/tests/test_basic.py", line 1712, in test_iv_cephes_vs_amos_mass_test
    assert dc[k] < 1e-9, (iv(v[k], x[k]), iv(v[k], x[k]+0j))
AssertionError: (1.8320048963545875e+306, (inf+0j))

----------------------------------------------------------------------
Ran 4168 tests in 1006.843s

FAILED (KNOWNFAIL=10, SKIP=29, errors=1, failures=4)
>>>

From tinauser at libero.it Thu Jun 3 09:36:51 2010
From: tinauser at libero.it (tinauser)
Date: Thu, 3 Jun 2010 06:36:51 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] numpy and C
Message-ID: <28767579.post@talk.nabble.com>

Hello,

I'm pretty new to both Python and C. I have a C application that runs a Python script; the Python script then uses some C functions defined in the code that embeds the script itself. I want to pass arrays between the C code (where an API gives me a pointer to the data I want to plot) and Python (a GUI).

What I've done so far is to allocate, before calling the Python script, a PyArray with PyArray_SimpleNew. Since the data are unsigned char, the call is:

my_second_array = (PyArrayObject *)PyArray_SimpleNew(2, dim, NPY_UBYTE);

When I call the Python script, I send my_second_array. On a timer, my_second_array is used as a parameter for a C-written function: the idea is to assign the pointer of a frame to my_second_array.data.

PyArrayObject *Pymatout_img = NULL;
// Pymatout_img is the matrix that was created in C during the
// initialization with PyArray_SimpleNew
PyArg_ParseTuple(args, "O", &Pymatout_img);
Pymatout_img->data = cam_frame->data;

The problem is that the compiler (I'm using Visual C 2008) says that it cannot convert char* to unsigned char* in this assignment... Can someone explain to me what I'm doing wrong?
--
View this message in context: http://old.nabble.com/numpy-and-C-tp28767579p28767579.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From gaedol at gmail.com Tue Jun 8 14:22:07 2010
From: gaedol at gmail.com (Marco)
Date: Tue, 8 Jun 2010 11:22:07 -0700
Subject: [SciPy-User] Crossing of Splines
Message-ID: 

Hi all!

I have 2 different datasets which I fit using interpolate.splrep().

I am interested in finding the point where the splines cross: as of now I use interpolate.splev() to evaluate each spline and then look for a zero in the difference of the evaluated splines.

Suggestions to do it better?
TIA,

marco

From ben.root at ou.edu Tue Jun 8 14:31:32 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Tue, 8 Jun 2010 13:31:32 -0500
Subject: [SciPy-User] matplotlib woes
In-Reply-To: <1275053367.1431.7.camel@falconeer>
References: <1275053367.1431.7.camel@falconeer>
Message-ID: 

Hello,

You may wish to check out pcolor() and/or pcolormesh(), as they allow you to specify the coordinate system.

As for how to space the plots better, there are some options. The one trick that I know of is to specify that you are using 3 rows when plotting the first two subplots, but then say that you are using 2 rows when plotting the last plot (or something like that). There is also AxesGrid, but I haven't tried using it to do what you are thinking.

Ben Root

2010/5/28 Thøger Emil Juul Thorsen 

> Hello SciPy list;
>
> For my thesis I have an image which is also a spectrum of an object. I
> want to plot the image using imshow along with a data plot of the
> intensity, as can be seen on http://yfrog.com/0tforscipylistp .
>
> My questions are 2:
>
> 1) imshow() sets the ticks on the two upper subplots as pixel
> coordinates. What I want to show as tick labels on my x-axis is the
> wavelength coordinates of the lower plot on the upper images (since
> there is a straightforward pixel-to-wavelength conversion). I have
> googled everywhere but can't seem to find a solution. Is it possible?
>
> 2) Is there any possible way to make the subplot layout look a bit
> nicer? Ideally to squeeze the two upper plots closer together and
> stretch the lower plot vertically, or at least to make the two upper
> subplots take up an equal amount of space?
>
> Best regards;
>
> Emil, python-newb and (former) IDL-user,
> Master student of Astrophysics at the University of Copenhagen,
> Niels Bohr Institute.
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aarchiba at physics.mcgill.ca Tue Jun 8 14:33:18 2010
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Tue, 8 Jun 2010 14:33:18 -0400
Subject: [SciPy-User] Crossing of Splines
In-Reply-To: 
References: 
Message-ID: 

On 8 June 2010 14:22, Marco wrote:
> Hi all!
>
> I have 2 different datasets which I fit using interpolate.splrep().
>
> I am interested in finding the point where the splines cross: as of
> now I use interpolate.splev() to evaluate each spline and then look
> for a zero in the difference of the evaluated splines.
>
> Suggestions to do it better?

It's not a wholly satisfactory solution, but if both splines were defined on the same set of knots, you could simply subtract their values to obtain a difference spline. There is code in scipy.interpolate (sproot) to efficiently and reliably find the zero(s) of a spline.

The trick is making sure both splines are defined on the same set of knots. splrep normally chooses its own set of knots, simplifying the curve where possible, but you can supply it with a list of knots. As long as you have roughly as many knots as data points and they're not too awkwardly spaced, you should be fine; I think splrep starts with the list of input data points and then deletes unnecessary knots, so you could simply supply a list of all x values for either data set as the knots argument. You should check, but the result should be a spline with exactly the set of knots (t in t,c,k) that you specified.
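In code, that might look something like the following (a rough, untested sketch; the data x, y1, y2 are made up, and it assumes both datasets share the same x grid so that the boundary knots agree):

import numpy as np
from scipy import interpolate

# Two made-up datasets on a common x grid that cross somewhere.
x = np.linspace(0.0, 10.0, 50)
y1 = np.sin(x)
y2 = 0.05 * (x - 5.0)

# Force both fits onto the same interior knots (here: a thinned subset of x).
knots = x[5:-5:2]
t1, c1, k = interpolate.splrep(x, y1, t=knots)
t2, c2, k2 = interpolate.splrep(x, y2, t=knots)

# With identical knots t and degree k, the difference of the two splines
# is just the spline with coefficients c1 - c2.
diff_tck = (t1, c1 - c2, k)

# sproot returns the zeros of the difference spline, i.e. the crossings.
print interpolate.sproot(diff_tck)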
Alternatively, if you are willing to look under the hood and interpret the t,c,k representation of the splines, you could insert knots into that in order to obtain a common set of knots for your two splines.

Anne

> TIA,
>
> marco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From robert.kern at gmail.com Tue Jun 8 14:40:34 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 8 Jun 2010 14:40:34 -0400
Subject: [SciPy-User] Triangular Distribution ppf method
In-Reply-To: 
References: 
Message-ID: 

On Tue, May 25, 2010 at 20:33, Leon Adams wrote:
> Hi all,
> There seems to be a bug of some sort in evaluating the ppf method of the
> scipy.stats.triang distribution. Evaluating the distribution with a location
> parameter of 1 or greater seems to be problematic. I am looking for
> confirmation of this behavior and suggestions for a workaround.

Please show us exactly what you did, exactly what results you got, and what results you expected. Please copy-and-paste rather than summarizing.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From tsyu80 at gmail.com Tue Jun 8 14:48:28 2010
From: tsyu80 at gmail.com (Tony S Yu)
Date: Tue, 8 Jun 2010 14:48:28 -0400
Subject: [SciPy-User] matplotlib woes
In-Reply-To: <1275053367.1431.7.camel@falconeer>
References: <1275053367.1431.7.camel@falconeer>
Message-ID: <01641729-20E1-407D-AE6D-B2E013A11317@gmail.com>

On May 28, 2010, at 9:29 AM, Thøger Emil Juul Thorsen wrote:

> Hello SciPy list;
>
> For my thesis I have an image which is also a spectrum of an object. I
> want to plot the image using imshow along with a data plot of the
> intensity, as can be seen on http://yfrog.com/0tforscipylistp .
>
> My questions are 2:
>
> 1) imshow() sets the ticks on the two upper subplots as pixel
> coordinates. What I want to show as tick labels on my x-axis is the
> wavelength coordinates of the lower plot on the upper images (since
> there is a straightforward pixel-to-wavelength conversion). I have
> googled everywhere but can't seem to find a solution. Is it possible?
>
> 2) Is there any possible way to make the subplot layout look a bit
> nicer? Ideally to squeeze the two upper plots closer together and
> stretch the lower plot vertically, or at least to make the two upper
> subplots take up an equal amount of space?
>
> Best regards;
>
> Emil, python-newb and (former) IDL-user,
> Master student of Astrophysics at the University of Copenhagen,
> Niels Bohr Institute.

Hey Emil,

1) You should try the `extent` argument in `imshow`. From the docs for imshow:

    *extent*: [ None | scalars (left, right, bottom, top) ]
        Data limits for the axes. The default assigns zero-based row,
        column indices to the *x*, *y* centers of the pixels.

For example:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(0, 2*np.pi)
>>> Y = np.sin([x, x])
>>> plt.imshow(Y, extent=[0, 2*np.pi, 0, 1])

2) If you're plotting interactively, you can configure the subplots using the command on the toolbar (3rd from the right). If you want to add the adjustment to your script, use `subplots_adjust`. For example,

>>> plt.subplots_adjust(hspace=0.1)

where "hspace" is the spacing (height) between subplots. Of course, change the value of hspace to suit your needs.
I'm actually surprised there's so much spacing between your subplots, so I suspect you may be doing something strange in your plot script. If subplots_adjust doesn't work, it may be helpful to see your plot script, BUT.... Please move this discussion to the matplotlib-users list (https://lists.sourceforge.net/lists/listinfo/matplotlib-users) if you have further questions or want to follow up on this question; it's more appropriate for matplotlib-specific questions. Best, -Tony From robert.kern at gmail.com Tue Jun 8 14:30:05 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 8 Jun 2010 14:30:05 -0400 Subject: [SciPy-User] matplotlib woes In-Reply-To: <1275053367.1431.7.camel@falconeer> References: <1275053367.1431.7.camel@falconeer> Message-ID: 2010/5/28 Thøger Emil Juul Thorsen : > Hello SciPy list; The matplotlib list is over here: https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Tue Jun 8 15:02:24 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 8 Jun 2010 15:02:24 -0400 Subject: [SciPy-User] re[SciPy-user] moving for loops... In-Reply-To: <28819859.post@talk.nabble.com> References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28819759.post@talk.nabble.com> <28819859.post@talk.nabble.com> Message-ID: On Tue, Jun 8, 2010 at 12:00 PM, mdekauwe wrote: > > Similarly, > > mths = np.arange(12) > pts = np.arange(numpts) > out_array[mths, pts] = array[mths, 0, r, c] If you want all months, this should work out_array = array[:, 0, r, c] or mths, r, c are all 1d, but mths has a different length, so it needs broadcasting out_array = array[mths[:,np.newaxis], 0, r, c] or I guess out_array[mths[:,np.newaxis], pts] = array[mths[:,np.newaxis], 0, r, c] Josef > > Does not work either... > -- > View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28819859.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From mdekauwe at gmail.com Tue Jun 8 17:08:30 2010 From: mdekauwe at gmail.com (mdekauwe) Date: Tue, 8 Jun 2010 14:08:30 -0700 (PDT) Subject: [SciPy-User] re[SciPy-user] moving for loops... In-Reply-To: References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28819759.post@talk.nabble.com> <28819859.post@talk.nabble.com> Message-ID: <28823221.post@talk.nabble.com> Hi, Yes that works, thanks again!!! I understand now what you mean about why I need to broadcast the months as it is a different size to the arrays r and c. Makes more sense!
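A small self-contained demonstration of that broadcasting (the array shapes and index values below are made up, and much smaller than the real data):

import numpy as np

nummonths, numpts = 12, 5
# made-up stand-in for the real 4-D data: (time, var, row, col)
array = np.random.rand(nummonths, 3, 6, 8)
r = np.array([0, 2, 5, 1, 3])  # made-up row index for each land point
c = np.array([7, 0, 3, 3, 1])  # made-up col index for each land point

mths = np.arange(nummonths)

# mths[:, np.newaxis] has shape (12, 1); r and c have shape (5,).
# The index arrays broadcast together to shape (12, 5), so every
# month gets paired with every (r, c) point in a single operation.
out_array = array[mths[:, np.newaxis], 0, r, c]
assert out_array.shape == (nummonths, numpts)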
josef.pktd wrote: > > On Tue, Jun 8, 2010 at 12:00 PM, mdekauwe wrote: >> >> Similarly, >> >> mths = np.arange(12) >> pts = np.arange(numpts) >> out_array[mths, pts] = array[mths, 0, r, c] > > If you want all months, this should work > > out_array = array[:, 0, r, c] > > or > mths, r, c are all 1d but mths has different length, so needs broadcasting > > out_array = array[mths[:,np.newaxis], 0, r, c] > > or I guess > > out_array[mths[:,np.newaxis], pts] = array[mths[:,np.newaxis], 0, r, c] > > Josef > >> >> Does not work either... >> -- >> View this message in context: >> http://old.nabble.com/removing-for-loops...-tp28633477p28819859.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28823221.html Sent from the Scipy-User mailing list archive at Nabble.com. From mdekauwe at gmail.com Tue Jun 8 18:41:15 2010 From: mdekauwe at gmail.com (mdekauwe) Date: Tue, 8 Jun 2010 15:41:15 -0700 (PDT) Subject: [SciPy-User] re[SciPy-user] moving for loops... In-Reply-To: References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28711581.post@talk.nabble.com> Message-ID: <28824023.post@talk.nabble.com> OK... but if I do... In [28]: np.mod(np.arange(nummonths*numyears), nummonths).reshape((-1, nummonths)) Out[28]: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) When really I would be after something like this I think? array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120], [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121], [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122] etc, etc i.e. so for each month jump across the years. Not quite sure of this example...this is what I currently have which does seem to work, though I guess not completely efficiently. for month in xrange(nummonths): tmp = jules[xrange(0, numyears * nummonths, nummonths),VAR,:,0] tmp[tmp < 0.0] = np.nan data[month,:] = np.mean(tmp, axis=0) Benjamin Root-2 wrote: > > If you want an average for each month from your timeseries, then the > sneaky > way would be to reshape your array so that the time dimension is split > into > two (month, year) dimensions. > > For a 1-D array, this would be: > >> dataarray = numpy.mod(numpy.arange(36), 12) >> print dataarray > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, > 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, > 10, 11]) >> datamatrix = dataarray.reshape((-1, 12)) >> print datamatrix > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > > Hope that helps. 
> > Ben Root > > > On Fri, May 28, 2010 at 3:28 PM, mdekauwe wrote: > >> >> OK so I just need to have a quick loop across the 12 months then, that is >> fine, just thought there might have been a sneaky way! >> >> Really appreciated, getting there slowly! >> >> >> >> josef.pktd wrote: >> > >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe wrote: >> >> >> >> ok - something like this then...but how would i get the index for the >> >> month >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? >> >> >> >> data[month,:] = array[xrange(0, numyears * nummonths, >> nummonths),VAR,:,0] >> > >> > you would still need to start at the right month >> > data[month,:] = array[xrange(month, numyears * nummonths, >> > nummonths),VAR,:,0] >> > or >> > data[month,:] = array[month: numyears * nummonths : nummonths),VAR,:,0] >> > >> > an alternative would be a reshape with an extra month dimension and >> > then sum only once over the year axis. this might be faster but >> > trickier to get the correct reshape . >> > >> > Josef >> > >> >> >> >> and would that be quicker than making an array months... >> >> >> >> months = np.arange(numyears * nummonths) >> >> >> >> and you that instead like you suggested x[start:end:12,:]? >> >> >> >> Many thanks again... >> >> >> >> >> >> josef.pktd wrote: >> >>> >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe wrote: >> >>>> >> >>>> Ok thanks...I'll take a look. >> >>>> >> >>>> Back to my loops issue. What if instead this time I wanted to take >> an >> >>>> average so every march in 11 years, is there a quicker way to go >> about >> >>>> doing >> >>>> that than my current method? >> >>>> >> >>>> nummonths = 12 >> >>>> numyears = 11 >> >>>> >> >>>> for month in xrange(nummonths): >> >>>> for i in xrange(numpts): >> >>>> for ym in xrange(month, numyears * nummonths, nummonths): >> >>>> data[month, i] += array[ym, VAR, land_pts_index[i], 0] >> >>> >> >>> >> >>> x[start:end:12,:] gives you every 12th row of an array x >> >>> >> >>> something like this should work to get rid of the inner loop, or you >> >>> could directly put >> >>> range(month, numyears * nummonths, nummonths) into the array instead >> >>> of ym and sum() >> >>> >> >>> Josef >> >>> >> >>> >> >>>> >> >>>> so for each point in the array for a given month i am jumping >> through >> >>>> and >> >>>> getting the next years month and so on, summing it. >> >>>> >> >>>> Thanks... >> >>>> >> >>>> >> >>>> josef.pktd wrote: >> >>>>> >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe >> wrote: >> >>>>>> >> >>>>>> Could you possibly if you have time explain further your comment >> re >> >>>>>> the >> >>>>>> p-values, your suggesting I am misusing them? >> >>>>> >> >>>>> Depends on your use and interpretation >> >>>>> >> >>>>> test statistics, p-values are random variables, if you look at >> several >> >>>>> tests at the same time, some p-values will be large just by chance. >> >>>>> If, for example you just look at the largest test statistic, then >> the >> >>>>> distribution for the max of several test statistics is not the same >> as >> >>>>> the distribution for a single test statistic >> >>>>> >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons >> >>>>> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm >> >>>>> >> >>>>> we also just had a related discussion for ANOVA post-hoc tests on >> the >> >>>>> pystatsmodels group. >> >>>>> >> >>>>> Josef >> >>>>>> >> >>>>>> Thanks. 
>> >>>>>> >> >>>>>> >> >>>>>> josef.pktd wrote: >> >>>>>>> >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe >> >>>>>>> wrote: >> >>>>>>>> >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the >> comparison >> >>>>>>>> for >> >>>>>>>> each >> >>>>>>>> pixel of the world and then I have a basemap function call which >> I >> >>>>>>>> guess >> >>>>>>>> slows it down further...hmm >> >>>>>>> >> >>>>>>> I don't see much that could be done differently, after a brief >> look. >> >>>>>>> >> >>>>>>> stats.pearsonr could be replaced by an array version using >> directly >> >>>>>>> the formula for correlation even with nans. wilcoxon looks slow, >> and >> >>>>>>> I >> >>>>>>> never tried or seen a faster version. >> >>>>>>> >> >>>>>>> just a reminder, the p-values are for a single test, when you >> have >> >>>>>>> many of them, then they don't have the right size/confidence >> level >> >>>>>>> for >> >>>>>>> an overall or joint test. (some packages report a Bonferroni >> >>>>>>> correction in this case) >> >>>>>>> >> >>>>>>> Josef >> >>>>>>> >> >>>>>>> >> >>>>>>>> >> >>>>>>>> i.e. >> >>>>>>>> >> >>>>>>>> def compareSnowData(jules_var): >> >>>>>>>> # Extract the 11 years of snow data and return >> >>>>>>>> outrows = 180 >> >>>>>>>> outcols = 360 >> >>>>>>>> numyears = 11 >> >>>>>>>> nummonths = 12 >> >>>>>>>> >> >>>>>>>> # Read various files >> >>>>>>>> fname="world_valid_jules_pts.ascii" >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, cols) = >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) >> >>>>>>>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, numrows=15238, >> >>>>>>>> numcols=1, >> >>>>>>>> \ >> >>>>>>>> timesteps=132, numvars=26) >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, numrows=15238, >> >>>>>>>> numcols=1, >> >>>>>>>> \ >> >>>>>>>> timesteps=132, numvars=26) >> >>>>>>>> >> >>>>>>>> # grab some space >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), >> >>>>>>>> dtype=np.float32) >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), >> >>>>>>>> dtype=np.float32) >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), >> dtype=np.float32) >> * >> >>>>>>>> np.nan >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >> dtype=np.float32) >> >>>>>>>> * >> >>>>>>>> np.nan >> >>>>>>>> >> >>>>>>>> # extract the data >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, data1_snow) >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, data2_snow) >> >>>>>>>> #for month in xrange(numyears * nummonths): >> >>>>>>>> # for i in xrange(numpts): >> >>>>>>>> # data1 = >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] >> >>>>>>>> # data2 = >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] >> >>>>>>>> # if data1 >= 0.0: >> >>>>>>>> # data1_snow[month,i] = data1 >> >>>>>>>> # else: >> >>>>>>>> # data1_snow[month,i] = np.nan >> >>>>>>>> # if data2 > 0.0: >> >>>>>>>> # data2_snow[month,i] = data2 >> >>>>>>>> # else: >> >>>>>>>> # data2_snow[month,i] = np.nan >> >>>>>>>> >> >>>>>>>> # exclude any months from *both* arrays where we have dodgy >> >>>>>>>> data, >> >>>>>>>> else >> >>>>>>>> we >> >>>>>>>> # can't do the correlations correctly!! 
>> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, >> data1_snow) >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, >> data1_snow) >> >>>>>>>> >> >>>>>>>> # put data on a regular grid... >> >>>>>>>> print 'regridding landpts...' >> >>>>>>>> for i in xrange(numpts): >> >>>>>>>> # exclude the NaN, note masking them doesn't work in the >> >>>>>>>> stats >> >>>>>>>> func >> >>>>>>>> x = data1_snow[:,i] >> >>>>>>>> x = x[np.isfinite(x)] >> >>>>>>>> y = data2_snow[:,i] >> >>>>>>>> y = y[np.isfinite(y)] >> >>>>>>>> >> >>>>>>>> # r^2 >> >>>>>>>> # exclude v.small arrays, i.e. we need just less over 4 >> >>>>>>>> years >> >>>>>>>> of >> >>>>>>>> data >> >>>>>>>> if len(x) and len(y) > 50: >> >>>>>>>> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] = >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 >> >>>>>>>> >> >>>>>>>> # wilcox signed rank test >> >>>>>>>> # make sure we have enough samples to do the test >> >>>>>>>> d = x - y >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # Keep all >> >>>>>>>> non-zero >> >>>>>>>> differences >> >>>>>>>> count = len(d) >> >>>>>>>> if count > 10: >> >>>>>>>> z, pval = stats.wilcoxon(x, y) >> >>>>>>>> # only map out sign different data >> >>>>>>>> if pval < 0.05: >> >>>>>>>> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >> = >> >>>>>>>> np.mean(x - y) >> >>>>>>>> >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> josef.pktd wrote: >> >>>>>>>>> >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe >> >>>>>>>>> wrote: >> >>>>>>>>>> >> >>>>>>>>>> Also I then need to remap the 2D array I make onto another >> grid >> >>>>>>>>>> (the >> >>>>>>>>>> world in >> >>>>>>>>>> this case). Which again I had am doing with a loop (note >> numpts >> >>>>>>>>>> is >> >>>>>>>>>> a >> >>>>>>>>>> lot >> >>>>>>>>>> bigger than my example above). >> >>>>>>>>>> >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >> dtype=np.float32) >> >>>>>>>>>> * >> >>>>>>>>>> np.nan >> >>>>>>>>>> for i in xrange(numpts): >> >>>>>>>>>> # exclude the NaN, note masking them doesn't work in >> the >> >>>>>>>>>> stats >> >>>>>>>>>> func >> >>>>>>>>>> x = data1_snow[:,i] >> >>>>>>>>>> x = x[np.isfinite(x)] >> >>>>>>>>>> y = data2_snow[:,i] >> >>>>>>>>>> y = y[np.isfinite(y)] >> >>>>>>>>>> >> >>>>>>>>>> # wilcox signed rank test >> >>>>>>>>>> # make sure we have enough samples to do the test >> >>>>>>>>>> d = x - y >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # Keep >> all >> >>>>>>>>>> non-zero >> >>>>>>>>>> differences >> >>>>>>>>>> count = len(d) >> >>>>>>>>>> if count > 10: >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) >> >>>>>>>>>> # only map out sign different data >> >>>>>>>>>> if pval < 0.05: >> >>>>>>>>>> >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >> >>>>>>>>>> = >> >>>>>>>>>> np.mean(x - y) >> >>>>>>>>>> >> >>>>>>>>>> Now I think I can push the data in one move into the >> >>>>>>>>>> wilcoxStats_snow >> >>>>>>>>>> array >> >>>>>>>>>> by removing the index, >> >>>>>>>>>> but I can't see how I will get the individual x and y pts for >> >>>>>>>>>> each >> >>>>>>>>>> array >> >>>>>>>>>> member correctly without the loop, this was my attempt which >> of >> >>>>>>>>>> course >> >>>>>>>>>> doesn't work! >> >>>>>>>>>> >> >>>>>>>>>> x = data1_snow[:,:] >> >>>>>>>>>> x = x[np.isfinite(x)] >> >>>>>>>>>> y = data2_snow[:,:] >> >>>>>>>>>> y = y[np.isfinite(y)] >> >>>>>>>>>> >> >>>>>>>>>> # r^2 >> >>>>>>>>>> # exclude v.small arrays, i.e. 
we need just less over 4 years >> of >> >>>>>>>>>> data >> >>>>>>>>>> if len(x) and len(y) > 50: >> >>>>>>>>>> pearsonsr_snow[((180-1)-(rows-1)),cols-1] = >> (stats.pearsonr(x, >> >>>>>>>>>> y)[0])**2 >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> If you want to do pairwise comparisons with stats.wilcoxon, >> then >> >>>>>>>>> you >> >>>>>>>>> might be stuck with the loop, since wilcoxon takes only two 1d >> >>>>>>>>> arrays >> >>>>>>>>> at a time (if I read the help correctly). >> >>>>>>>>> >> >>>>>>>>> Also the presence of nans might force the use a loop. >> stats.mstats >> >>>>>>>>> has >> >>>>>>>>> masked array versions, but I didn't see wilcoxon in the list. >> >>>>>>>>> (Even >> >>>>>>>>> when vectorized operations would work with regular arrays, nan >> or >> >>>>>>>>> masked array versions still have to loop in many cases.) >> >>>>>>>>> >> >>>>>>>>> If you have many columns with count <= 10, so that wilcoxon is >> not >> >>>>>>>>> calculated then it might be worth to use only array operations >> up >> >>>>>>>>> to >> >>>>>>>>> that point. If wilcoxon is calculated most of the time, then >> it's >> >>>>>>>>> not >> >>>>>>>>> worth thinking too hard about this. >> >>>>>>>>> >> >>>>>>>>> Josef >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> thanks. >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> mdekauwe wrote: >> >>>>>>>>>>> >> >>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both methods >> >>>>>>>>>>> work. >> >>>>>>>>>>> >> >>>>>>>>>>> I don't quite get what you mean about slicing with axis > 3. >> Is >> >>>>>>>>>>> there >> >>>>>>>>>>> a >> >>>>>>>>>>> link you can recommend I should read? Does that mean given I >> >>>>>>>>>>> have >> >>>>>>>>>>> 4dims >> >>>>>>>>>>> that Josef's suggestion would be more advised in this case? >> >>>>>>>>> >> >>>>>>>>> There were several discussions on the mailing lists (fancy >> slicing >> >>>>>>>>> and >> >>>>>>>>> indexing). Your case is safe, but if you run in future into >> funny >> >>>>>>>>> shapes, you can look up the details. >> >>>>>>>>> when in doubt, I use np.arange(...) >> >>>>>>>>> >> >>>>>>>>> Josef >> >>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> Thanks. >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> josef.pktd wrote: >> >>>>>>>>>>>> >> >>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe < >> mdekauwe at gmail.com> >> >>>>>>>>>>>> wrote: >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> Thanks that works... >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> So the way to do it is with np.arange(tsteps)[:,None], that >> >>>>>>>>>>>>> was >> >>>>>>>>>>>>> the >> >>>>>>>>>>>>> step >> >>>>>>>>>>>>> I >> >>>>>>>>>>>>> was struggling with, so this forms a 2D array which >> replaces >> >>>>>>>>>>>>> the >> >>>>>>>>>>>>> the >> >>>>>>>>>>>>> two >> >>>>>>>>>>>>> for >> >>>>>>>>>>>>> loops? Do I have that right? >> >>>>>>>>>>>> >> >>>>>>>>>>>> Yes, but as Zachary showed, if you need the full index in a >> >>>>>>>>>>>> dimension, >> >>>>>>>>>>>> then you can use slicing. It might be faster. >> >>>>>>>>>>>> And a warning, mixing slices and index arrays with 3 or more >> >>>>>>>>>>>> dimensions can have some surprise switching of axes. >> >>>>>>>>>>>> >> >>>>>>>>>>>> Josef >> >>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> A lot quicker...! 
>> >>>>>>>>>>>>> >> >>>>>>>>>>>>> Martin >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> josef.pktd wrote: >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> wrote: >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> Hi, >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> I am trying to extract data from a 4D array and store it >> in >> >>>>>>>>>>>>>>> a >> >>>>>>>>>>>>>>> 2D >> >>>>>>>>>>>>>>> array, >> >>>>>>>>>>>>>>> but >> >>>>>>>>>>>>>>> avoid my current usage of the for loops for speed, as in >> >>>>>>>>>>>>>>> reality >> >>>>>>>>>>>>>>> the >> >>>>>>>>>>>>>>> arrays >> >>>>>>>>>>>>>>> sizes are quite big. Could someone also try and explain >> the >> >>>>>>>>>>>>>>> solution >> >>>>>>>>>>>>>>> as >> >>>>>>>>>>>>>>> well >> >>>>>>>>>>>>>>> if they have a spare moment as I am still finding it >> quite >> >>>>>>>>>>>>>>> difficult >> >>>>>>>>>>>>>>> to >> >>>>>>>>>>>>>>> get >> >>>>>>>>>>>>>>> over the habit of using loops (C convert for my sins). I >> get >> >>>>>>>>>>>>>>> that >> >>>>>>>>>>>>>>> one >> >>>>>>>>>>>>>>> could >> >>>>>>>>>>>>>>> precompute the indices's i and j i.e. >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> i = np.arange(tsteps) >> >>>>>>>>>>>>>>> j = np.arange(numpts) >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> but just can't get my head round how i then use them... >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> Thanks, >> >>>>>>>>>>>>>>> Martin >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> import numpy as np >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> numpts=10 >> >>>>>>>>>>>>>>> tsteps = 12 >> >>>>>>>>>>>>>>> vari = 22 >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, 1)) >> >>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), dtype=np.float32) >> >>>>>>>>>>>>>>> index = np.arange(numpts) >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> for i in xrange(tsteps): >> >>>>>>>>>>>>>>> for j in xrange(numpts): >> >>>>>>>>>>>>>>> new_data[i,j] = data[i,5,index[j],0] >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> The index arrays need to be broadcastable against each >> other. >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> I think this should do it >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, >> >>>>>>>>>>>>>> np.arange(numpts), >> >>>>>>>>>>>>>> 0] >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> Josef >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> -- >> >>>>>>>>>>>>>>> View this message in context: >> >>>>>>>>>>>>>>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28633477.html >> >>>>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at >> Nabble.com. >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> _______________________________________________ >> >>>>>>>>>>>>>>> SciPy-User mailing list >> >>>>>>>>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>> _______________________________________________ >> >>>>>>>>>>>>>> SciPy-User mailing list >> >>>>>>>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> -- >> >>>>>>>>>>>>> View this message in context: >> >>>>>>>>>>>>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28634924.html >> >>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at >> Nabble.com. 
>> >>>>>>>>>>>>> >> >>>>>>>>>>>>> _______________________________________________ >> >>>>>>>>>>>>> SciPy-User mailing list >> >>>>>>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>>>>>> >> >>>>>>>>>>>> _______________________________________________ >> >>>>>>>>>>>> SciPy-User mailing list >> >>>>>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> -- >> >>>>>>>>>> View this message in context: >> >>>>>>>>>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28640656.html >> >>>>>>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. >> >>>>>>>>>> >> >>>>>>>>>> _______________________________________________ >> >>>>>>>>>> SciPy-User mailing list >> >>>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>>> >> >>>>>>>>> _______________________________________________ >> >>>>>>>>> SciPy-User mailing list >> >>>>>>>>> SciPy-User at scipy.org >> >>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>> >> >>>>>>>> -- >> >>>>>>>> View this message in context: >> >>>>>>>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28642434.html >> >>>>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. >> >>>>>>>> >> >>>>>>>> _______________________________________________ >> >>>>>>>> SciPy-User mailing list >> >>>>>>>> SciPy-User at scipy.org >> >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>>> >> >>>>>>> _______________________________________________ >> >>>>>>> SciPy-User mailing list >> >>>>>>> SciPy-User at scipy.org >> >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>>> >> >>>>>>> >> >>>>>> >> >>>>>> -- >> >>>>>> View this message in context: >> >>>>>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28686356.html >> >>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. >> >>>>>> >> >>>>>> _______________________________________________ >> >>>>>> SciPy-User mailing list >> >>>>>> SciPy-User at scipy.org >> >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>>> >> >>>>> _______________________________________________ >> >>>>> SciPy-User mailing list >> >>>>> SciPy-User at scipy.org >> >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>>> >> >>>>> >> >>>> >> >>>> -- >> >>>> View this message in context: >> >>>> http://old.nabble.com/removing-for-loops...-tp28633477p28711249.html >> >>>> Sent from the Scipy-User mailing list archive at Nabble.com. >> >>>> >> >>>> _______________________________________________ >> >>>> SciPy-User mailing list >> >>>> SciPy-User at scipy.org >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>>> >> >>> _______________________________________________ >> >>> SciPy-User mailing list >> >>> SciPy-User at scipy.org >> >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >>> >> >>> >> >> >> >> -- >> >> View this message in context: >> >> http://old.nabble.com/removing-for-loops...-tp28633477p28711444.html >> >> Sent from the Scipy-User mailing list archive at Nabble.com. 
>> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> >> -- >> View this message in context: >> http://old.nabble.com/removing-for-loops...-tp28633477p28711581.html >> Sent from the Scipy-User mailing list archive at Nabble.com. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28824023.html Sent from the Scipy-User mailing list archive at Nabble.com. From jsalvati at u.washington.edu Tue Jun 8 18:46:28 2010 From: jsalvati at u.washington.edu (John Salvatier) Date: Tue, 8 Jun 2010 15:46:28 -0700 Subject: [SciPy-User] Can I create a 3 argument UFunc easily? Message-ID: Hello, I would like to make a 3 argument UFunc that finds the weighted average of two of the arguments using the 3rd argument as the weight. This way, the .accumulate method of the ufunc can be used as an exponentially weighted moving average function. Unfortunately I am not very familiar with the Numpy C API, so I was hoping to use the Cython hack for making UFuncs ( http://wiki.cython.org/MarkLodato/CreatingUfuncs). However, looking at the UFunc C API doc (http://docs.scipy.org/doc/numpy/reference/c-api.ufunc.html), it looks like numpy only has 2 argument "generic functions". Is there a simple way to create a "generic function" that takes 3 arguments that will still work with accumulate? Is there another way to create the sort of UFunc I want? Best Regards, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophermarkstrickland at gmail.com Tue Jun 8 18:52:30 2010 From: christophermarkstrickland at gmail.com (Chris Strickland) Date: Wed, 9 Jun 2010 08:52:30 +1000 Subject: [SciPy-User] log pdf, cdf, etc In-Reply-To: <201005272122.54911.christopher.strickland@qut.edu.au> References: <201005272122.54911.christopher.strickland@qut.edu.au> Message-ID: I posted this over a week ago and we have a running thread with discussion on it. So ignore this somewhat mysterious re-appearance of an old post. If you are interested in the discussion post in the other thread. On Thu, May 27, 2010 at 9:22 PM, Chris Strickland < christopher.strickland at qut.edu.au> wrote: > Hi, > > When using any of the distributions of scipy.stats there does not seem to > be > the ability (or at least I cannot figure out how) to have the function > return > the log of the pdf, cdf, sf, etc. For statistical analysis this is > essential. > For instance suppose we are interested in an exponential distribution for a > random variable x with a hyperparameter lambda there needs to be an option > that returns -log(lambda)-x/lambda. It is not sufficient (numerically) to > calculate log(scipy.stats.expon.pdf(x,lambda)). > > Is there a way to do this using the distributions in scipy.stats? > > If there is not is it possible for me to suggest that this feature is > added. 
> There is such an excellent range of distributions, each with such an > impressive range of options, it seems a shame to have to mostly manually > code > up the log of pdfs and often call the log of CDFs from R. > > Thanks, > Chris. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jun 8 19:27:33 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 8 Jun 2010 19:27:33 -0400 Subject: [SciPy-User] Can I create a 3 argument UFunc easily? In-Reply-To: References: Message-ID: On Tue, Jun 8, 2010 at 18:46, John Salvatier wrote: > Hello, > > I would like to make a 3 argument UFunc that finds the weighted average of > two of the arguments using the 3rd argument as the weight. This way, the > .accumulate method of the ufunc can be used as an exponentially weighted > moving average function. > > Unfortunately I am not very familiar with the Numpy C API, so I was hoping > to use the Cython hack for making UFuncs > (http://wiki.cython.org/MarkLodato/CreatingUfuncs). However, looking at the > UFunc C API doc > (http://docs.scipy.org/doc/numpy/reference/c-api.ufunc.html), it looks like > numpy only has 2 argument "generic functions". Is there a simple way to > create a "generic function" that takes 3 arguments that will still work with > accumulate? Is there another way to create the sort of UFunc I want? While you can make n-argument ufuncs (scipy.special has many of them), .accumulate() only works for 2-argument ufuncs. All in all, it's a lot easier and more performant to simply code up an EWMA in C rather than "tricking" the general ufunc machinery into achieving a specific effect. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dwf at cs.toronto.edu Tue Jun 8 20:01:19 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 8 Jun 2010 20:01:19 -0400 Subject: [SciPy-User] Python scipy error. In-Reply-To: References: Message-ID: <5CFD0CA4-AC67-45D0-BE8A-D4513A32C7E8@cs.toronto.edu> On 2010-05-26, at 12:07 AM, Padma TAN wrote: > Hi > > Error message I got when needed to run this. Please assist! Please send the output of numpy.show_config() and also tell us what OS and distribution (e.g. Ubuntu) this is from. It looks like you have a misconfigured BLAS. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Tue Jun 8 20:49:53 2010 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 8 Jun 2010 19:49:53 -0500 Subject: [SciPy-User] re[SciPy-user] moving for loops... In-Reply-To: <28824023.post@talk.nabble.com> References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com> Message-ID: The np.mod in my example caused the data points to stay within [0, 11] in order to illustrate that these are months. In my example, months are columns, years are rows. In your desired output, months are rows and years are columns. It makes very little difference which way you have it.
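To see that the two layouts carry the same information, here is a tiny sketch using index values in place of data (three made-up years of monthly samples):

>>> import numpy as np
>>> idx = np.arange(36).reshape((-1, 12))  # rows are years, columns are months
>>> idx[:, 2]   # every March, jumping across the years
array([ 2, 14, 26])
>>> idx.T[2]    # the same values with months as rows and years as columns
array([ 2, 14, 26])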
Anyway, let's imagine that we have a time series of data "jules". We can easily reshape this like so: > jules_2d = jules.reshape((-1, 12)) > jules_monthly = np.mean(jules_2d, axis=0) > print jules_monthly.shape (12,) voila! You have 12 values in jules_monthly, which are the means for each month across all years. protip - if you want yearly averages just change the axis parameter in np.mean(): > jules_yearly = np.mean(jules_2d, axis=1) I hope that makes my previous explanation clearer. Ben Root On Tue, Jun 8, 2010 at 5:41 PM, mdekauwe wrote: > > OK... > > but if I do... > > In [28]: np.mod(np.arange(nummonths*numyears), nummonths).reshape((-1, > nummonths)) > Out[28]: > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > > When really I would be after something like this I think? > > array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120], > [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121], > [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122] > etc, etc > > i.e. so for each month jump across the years. > > Not quite sure of this example...this is what I currently have which does > seem to work, though I guess not completely efficiently. > > for month in xrange(nummonths): > tmp = jules[xrange(0, numyears * nummonths, nummonths),VAR,:,0] > tmp[tmp < 0.0] = np.nan > data[month,:] = np.mean(tmp, axis=0) > > > > > Benjamin Root-2 wrote: > > > > If you want an average for each month from your timeseries, then the > > sneaky > > way would be to reshape your array so that the time dimension is split > > into > > two (month, year) dimensions. > > > > For a 1-D array, this would be: > > > >> dataarray = numpy.mod(numpy.arange(36), 12) > >> print dataarray > > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, > 4, > > 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, 8, > 9, > > 10, 11]) > >> datamatrix = dataarray.reshape((-1, 12)) > >> print datamatrix > > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > > > > Hope that helps. > > > > Ben Root > > > > > > On Fri, May 28, 2010 at 3:28 PM, mdekauwe wrote: > > > >> > >> OK so I just need to have a quick loop across the 12 months then, that > is > >> fine, just thought there might have been a sneaky way! > >> > >> Really appreciated, getting there slowly! > >> > >> > >> > >> josef.pktd wrote: > >> > > >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe wrote: > >> >> > >> >> ok - something like this then...but how would i get the index for the > >> >> month > >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? > >> >> > >> >> data[month,:] = array[xrange(0, numyears * nummonths, > >> nummonths),VAR,:,0] > >> > > >> > you would still need to start at the right month > >> > data[month,:] = array[xrange(month, numyears * nummonths, > >> > nummonths),VAR,:,0] > >> > or > >> > data[month,:] = array[month: numyears * nummonths : > nummonths),VAR,:,0] > >> > > >> > an alternative would be a reshape with an extra month dimension and > >> > then sum only once over the year axis. this might be faster but > >> > trickier to get the correct reshape .
> >> > > >> > Josef > >> > > >> >> > >> >> and would that be quicker than making an array months... > >> >> > >> >> months = np.arange(numyears * nummonths) > >> >> > >> >> and you that instead like you suggested x[start:end:12,:]? > >> >> > >> >> Many thanks again... > >> >> > >> >> > >> >> josef.pktd wrote: > >> >>> > >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe > wrote: > >> >>>> > >> >>>> Ok thanks...I'll take a look. > >> >>>> > >> >>>> Back to my loops issue. What if instead this time I wanted to take > >> an > >> >>>> average so every march in 11 years, is there a quicker way to go > >> about > >> >>>> doing > >> >>>> that than my current method? > >> >>>> > >> >>>> nummonths = 12 > >> >>>> numyears = 11 > >> >>>> > >> >>>> for month in xrange(nummonths): > >> >>>> for i in xrange(numpts): > >> >>>> for ym in xrange(month, numyears * nummonths, nummonths): > >> >>>> data[month, i] += array[ym, VAR, land_pts_index[i], 0] > >> >>> > >> >>> > >> >>> x[start:end:12,:] gives you every 12th row of an array x > >> >>> > >> >>> something like this should work to get rid of the inner loop, or you > >> >>> could directly put > >> >>> range(month, numyears * nummonths, nummonths) into the array instead > >> >>> of ym and sum() > >> >>> > >> >>> Josef > >> >>> > >> >>> > >> >>>> > >> >>>> so for each point in the array for a given month i am jumping > >> through > >> >>>> and > >> >>>> getting the next years month and so on, summing it. > >> >>>> > >> >>>> Thanks... > >> >>>> > >> >>>> > >> >>>> josef.pktd wrote: > >> >>>>> > >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe > >> wrote: > >> >>>>>> > >> >>>>>> Could you possibly if you have time explain further your comment > >> re > >> >>>>>> the > >> >>>>>> p-values, your suggesting I am misusing them? > >> >>>>> > >> >>>>> Depends on your use and interpretation > >> >>>>> > >> >>>>> test statistics, p-values are random variables, if you look at > >> several > >> >>>>> tests at the same time, some p-values will be large just by > chance. > >> >>>>> If, for example you just look at the largest test statistic, then > >> the > >> >>>>> distribution for the max of several test statistics is not the > same > >> as > >> >>>>> the distribution for a single test statistic > >> >>>>> > >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons > >> >>>>> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm > >> >>>>> > >> >>>>> we also just had a related discussion for ANOVA post-hoc tests on > >> the > >> >>>>> pystatsmodels group. > >> >>>>> > >> >>>>> Josef > >> >>>>>> > >> >>>>>> Thanks. > >> >>>>>> > >> >>>>>> > >> >>>>>> josef.pktd wrote: > >> >>>>>>> > >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe > >> >>>>>>> wrote: > >> >>>>>>>> > >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the > >> comparison > >> >>>>>>>> for > >> >>>>>>>> each > >> >>>>>>>> pixel of the world and then I have a basemap function call > which > >> I > >> >>>>>>>> guess > >> >>>>>>>> slows it down further...hmm > >> >>>>>>> > >> >>>>>>> I don't see much that could be done differently, after a brief > >> look. > >> >>>>>>> > >> >>>>>>> stats.pearsonr could be replaced by an array version using > >> directly > >> >>>>>>> the formula for correlation even with nans. wilcoxon looks slow, > >> and > >> >>>>>>> I > >> >>>>>>> never tried or seen a faster version. 
> >> >>>>>>> > >> >>>>>>> just a reminder, the p-values are for a single test, when you > >> have > >> >>>>>>> many of them, then they don't have the right size/confidence > >> level > >> >>>>>>> for > >> >>>>>>> an overall or joint test. (some packages report a Bonferroni > >> >>>>>>> correction in this case) > >> >>>>>>> > >> >>>>>>> Josef > >> >>>>>>> > >> >>>>>>> > >> >>>>>>>> > >> >>>>>>>> i.e. > >> >>>>>>>> > >> >>>>>>>> def compareSnowData(jules_var): > >> >>>>>>>> # Extract the 11 years of snow data and return > >> >>>>>>>> outrows = 180 > >> >>>>>>>> outcols = 360 > >> >>>>>>>> numyears = 11 > >> >>>>>>>> nummonths = 12 > >> >>>>>>>> > >> >>>>>>>> # Read various files > >> >>>>>>>> fname="world_valid_jules_pts.ascii" > >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, cols) = > >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) > >> >>>>>>>> > >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" > >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, numrows=15238, > >> >>>>>>>> numcols=1, > >> >>>>>>>> \ > >> >>>>>>>> timesteps=132, numvars=26) > >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" > >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, numrows=15238, > >> >>>>>>>> numcols=1, > >> >>>>>>>> \ > >> >>>>>>>> timesteps=132, numvars=26) > >> >>>>>>>> > >> >>>>>>>> # grab some space > >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), > >> >>>>>>>> dtype=np.float32) > >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), > >> >>>>>>>> dtype=np.float32) > >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), > >> dtype=np.float32) > >> * > >> >>>>>>>> np.nan > >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >> dtype=np.float32) > >> >>>>>>>> * > >> >>>>>>>> np.nan > >> >>>>>>>> > >> >>>>>>>> # extract the data > >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] > >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] > >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, data1_snow) > >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, data2_snow) > >> >>>>>>>> #for month in xrange(numyears * nummonths): > >> >>>>>>>> # for i in xrange(numpts): > >> >>>>>>>> # data1 = > >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] > >> >>>>>>>> # data2 = > >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] > >> >>>>>>>> # if data1 >= 0.0: > >> >>>>>>>> # data1_snow[month,i] = data1 > >> >>>>>>>> # else: > >> >>>>>>>> # data1_snow[month,i] = np.nan > >> >>>>>>>> # if data2 > 0.0: > >> >>>>>>>> # data2_snow[month,i] = data2 > >> >>>>>>>> # else: > >> >>>>>>>> # data2_snow[month,i] = np.nan > >> >>>>>>>> > >> >>>>>>>> # exclude any months from *both* arrays where we have dodgy > >> >>>>>>>> data, > >> >>>>>>>> else > >> >>>>>>>> we > >> >>>>>>>> # can't do the correlations correctly!! > >> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, > >> data1_snow) > >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, > >> data1_snow) > >> >>>>>>>> > >> >>>>>>>> # put data on a regular grid... > >> >>>>>>>> print 'regridding landpts...' > >> >>>>>>>> for i in xrange(numpts): > >> >>>>>>>> # exclude the NaN, note masking them doesn't work in the > >> >>>>>>>> stats > >> >>>>>>>> func > >> >>>>>>>> x = data1_snow[:,i] > >> >>>>>>>> x = x[np.isfinite(x)] > >> >>>>>>>> y = data2_snow[:,i] > >> >>>>>>>> y = y[np.isfinite(y)] > >> >>>>>>>> > >> >>>>>>>> # r^2 > >> >>>>>>>> # exclude v.small arrays, i.e. 
we need just less over 4 > >> >>>>>>>> years > >> >>>>>>>> of > >> >>>>>>>> data > >> >>>>>>>> if len(x) and len(y) > 50: > >> >>>>>>>> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] = > >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 > >> >>>>>>>> > >> >>>>>>>> # wilcox signed rank test > >> >>>>>>>> # make sure we have enough samples to do the test > >> >>>>>>>> d = x - y > >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # Keep > all > >> >>>>>>>> non-zero > >> >>>>>>>> differences > >> >>>>>>>> count = len(d) > >> >>>>>>>> if count > 10: > >> >>>>>>>> z, pval = stats.wilcoxon(x, y) > >> >>>>>>>> # only map out sign different data > >> >>>>>>>> if pval < 0.05: > >> >>>>>>>> > wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >> = > >> >>>>>>>> np.mean(x - y) > >> >>>>>>>> > >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) > >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> josef.pktd wrote: > >> >>>>>>>>> > >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe < > mdekauwe at gmail.com> > >> >>>>>>>>> wrote: > >> >>>>>>>>>> > >> >>>>>>>>>> Also I then need to remap the 2D array I make onto another > >> grid > >> >>>>>>>>>> (the > >> >>>>>>>>>> world in > >> >>>>>>>>>> this case). Which again I had am doing with a loop (note > >> numpts > >> >>>>>>>>>> is > >> >>>>>>>>>> a > >> >>>>>>>>>> lot > >> >>>>>>>>>> bigger than my example above). > >> >>>>>>>>>> > >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >> dtype=np.float32) > >> >>>>>>>>>> * > >> >>>>>>>>>> np.nan > >> >>>>>>>>>> for i in xrange(numpts): > >> >>>>>>>>>> # exclude the NaN, note masking them doesn't work in > >> the > >> >>>>>>>>>> stats > >> >>>>>>>>>> func > >> >>>>>>>>>> x = data1_snow[:,i] > >> >>>>>>>>>> x = x[np.isfinite(x)] > >> >>>>>>>>>> y = data2_snow[:,i] > >> >>>>>>>>>> y = y[np.isfinite(y)] > >> >>>>>>>>>> > >> >>>>>>>>>> # wilcox signed rank test > >> >>>>>>>>>> # make sure we have enough samples to do the test > >> >>>>>>>>>> d = x - y > >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # Keep > >> all > >> >>>>>>>>>> non-zero > >> >>>>>>>>>> differences > >> >>>>>>>>>> count = len(d) > >> >>>>>>>>>> if count > 10: > >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) > >> >>>>>>>>>> # only map out sign different data > >> >>>>>>>>>> if pval < 0.05: > >> >>>>>>>>>> > >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >> >>>>>>>>>> = > >> >>>>>>>>>> np.mean(x - y) > >> >>>>>>>>>> > >> >>>>>>>>>> Now I think I can push the data in one move into the > >> >>>>>>>>>> wilcoxStats_snow > >> >>>>>>>>>> array > >> >>>>>>>>>> by removing the index, > >> >>>>>>>>>> but I can't see how I will get the individual x and y pts for > >> >>>>>>>>>> each > >> >>>>>>>>>> array > >> >>>>>>>>>> member correctly without the loop, this was my attempt which > >> of > >> >>>>>>>>>> course > >> >>>>>>>>>> doesn't work! > >> >>>>>>>>>> > >> >>>>>>>>>> x = data1_snow[:,:] > >> >>>>>>>>>> x = x[np.isfinite(x)] > >> >>>>>>>>>> y = data2_snow[:,:] > >> >>>>>>>>>> y = y[np.isfinite(y)] > >> >>>>>>>>>> > >> >>>>>>>>>> # r^2 > >> >>>>>>>>>> # exclude v.small arrays, i.e. 
we need just less over 4 years > >> of > >> >>>>>>>>>> data > >> >>>>>>>>>> if len(x) and len(y) > 50: > >> >>>>>>>>>> pearsonsr_snow[((180-1)-(rows-1)),cols-1] = > >> (stats.pearsonr(x, > >> >>>>>>>>>> y)[0])**2 > >> >>>>>>>>> > >> >>>>>>>>> > >> >>>>>>>>> If you want to do pairwise comparisons with stats.wilcoxon, > >> then > >> >>>>>>>>> you > >> >>>>>>>>> might be stuck with the loop, since wilcoxon takes only two 1d > >> >>>>>>>>> arrays > >> >>>>>>>>> at a time (if I read the help correctly). > >> >>>>>>>>> > >> >>>>>>>>> Also the presence of nans might force the use a loop. > >> stats.mstats > >> >>>>>>>>> has > >> >>>>>>>>> masked array versions, but I didn't see wilcoxon in the list. > >> >>>>>>>>> (Even > >> >>>>>>>>> when vectorized operations would work with regular arrays, nan > >> or > >> >>>>>>>>> masked array versions still have to loop in many cases.) > >> >>>>>>>>> > >> >>>>>>>>> If you have many columns with count <= 10, so that wilcoxon is > >> not > >> >>>>>>>>> calculated then it might be worth to use only array operations > >> up > >> >>>>>>>>> to > >> >>>>>>>>> that point. If wilcoxon is calculated most of the time, then > >> it's > >> >>>>>>>>> not > >> >>>>>>>>> worth thinking too hard about this. > >> >>>>>>>>> > >> >>>>>>>>> Josef > >> >>>>>>>>> > >> >>>>>>>>> > >> >>>>>>>>>> > >> >>>>>>>>>> thanks. > >> >>>>>>>>>> > >> >>>>>>>>>> > >> >>>>>>>>>> > >> >>>>>>>>>> > >> >>>>>>>>>> mdekauwe wrote: > >> >>>>>>>>>>> > >> >>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both > methods > >> >>>>>>>>>>> work. > >> >>>>>>>>>>> > >> >>>>>>>>>>> I don't quite get what you mean about slicing with axis > 3. > >> Is > >> >>>>>>>>>>> there > >> >>>>>>>>>>> a > >> >>>>>>>>>>> link you can recommend I should read? Does that mean given I > >> >>>>>>>>>>> have > >> >>>>>>>>>>> 4dims > >> >>>>>>>>>>> that Josef's suggestion would be more advised in this case? > >> >>>>>>>>> > >> >>>>>>>>> There were several discussions on the mailing lists (fancy > >> slicing > >> >>>>>>>>> and > >> >>>>>>>>> indexing). Your case is safe, but if you run in future into > >> funny > >> >>>>>>>>> shapes, you can look up the details. > >> >>>>>>>>> when in doubt, I use np.arange(...) > >> >>>>>>>>> > >> >>>>>>>>> Josef > >> >>>>>>>>> > >> >>>>>>>>>>> > >> >>>>>>>>>>> Thanks. > >> >>>>>>>>>>> > >> >>>>>>>>>>> > >> >>>>>>>>>>> > >> >>>>>>>>>>> josef.pktd wrote: > >> >>>>>>>>>>>> > >> >>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe < > >> mdekauwe at gmail.com> > >> >>>>>>>>>>>> wrote: > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> Thanks that works... > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> So the way to do it is with np.arange(tsteps)[:,None], > that > >> >>>>>>>>>>>>> was > >> >>>>>>>>>>>>> the > >> >>>>>>>>>>>>> step > >> >>>>>>>>>>>>> I > >> >>>>>>>>>>>>> was struggling with, so this forms a 2D array which > >> replaces > >> >>>>>>>>>>>>> the > >> >>>>>>>>>>>>> the > >> >>>>>>>>>>>>> two > >> >>>>>>>>>>>>> for > >> >>>>>>>>>>>>> loops? Do I have that right? > >> >>>>>>>>>>>> > >> >>>>>>>>>>>> Yes, but as Zachary showed, if you need the full index in a > >> >>>>>>>>>>>> dimension, > >> >>>>>>>>>>>> then you can use slicing. It might be faster. > >> >>>>>>>>>>>> And a warning, mixing slices and index arrays with 3 or > more > >> >>>>>>>>>>>> dimensions can have some surprise switching of axes. > >> >>>>>>>>>>>> > >> >>>>>>>>>>>> Josef > >> >>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> A lot quicker...! 
> >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> Martin > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> josef.pktd wrote: > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> wrote: > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> Hi, > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> I am trying to extract data from a 4D array and store it > >> in > >> >>>>>>>>>>>>>>> a > >> >>>>>>>>>>>>>>> 2D > >> >>>>>>>>>>>>>>> array, > >> >>>>>>>>>>>>>>> but > >> >>>>>>>>>>>>>>> avoid my current usage of the for loops for speed, as in > >> >>>>>>>>>>>>>>> reality > >> >>>>>>>>>>>>>>> the > >> >>>>>>>>>>>>>>> arrays > >> >>>>>>>>>>>>>>> sizes are quite big. Could someone also try and explain > >> the > >> >>>>>>>>>>>>>>> solution > >> >>>>>>>>>>>>>>> as > >> >>>>>>>>>>>>>>> well > >> >>>>>>>>>>>>>>> if they have a spare moment as I am still finding it > >> quite > >> >>>>>>>>>>>>>>> difficult > >> >>>>>>>>>>>>>>> to > >> >>>>>>>>>>>>>>> get > >> >>>>>>>>>>>>>>> over the habit of using loops (C convert for my sins). I > >> get > >> >>>>>>>>>>>>>>> that > >> >>>>>>>>>>>>>>> one > >> >>>>>>>>>>>>>>> could > >> >>>>>>>>>>>>>>> precompute the indices's i and j i.e. > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> i = np.arange(tsteps) > >> >>>>>>>>>>>>>>> j = np.arange(numpts) > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> but just can't get my head round how i then use them... > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> Thanks, > >> >>>>>>>>>>>>>>> Martin > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> import numpy as np > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> numpts=10 > >> >>>>>>>>>>>>>>> tsteps = 12 > >> >>>>>>>>>>>>>>> vari = 22 > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, 1)) > >> >>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), dtype=np.float32) > >> >>>>>>>>>>>>>>> index = np.arange(numpts) > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> for i in xrange(tsteps): > >> >>>>>>>>>>>>>>> for j in xrange(numpts): > >> >>>>>>>>>>>>>>> new_data[i,j] = data[i,5,index[j],0] > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> The index arrays need to be broadcastable against each > >> other. > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> I think this should do it > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, > >> >>>>>>>>>>>>>> np.arange(numpts), > >> >>>>>>>>>>>>>> 0] > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> Josef > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> -- > >> >>>>>>>>>>>>>>> View this message in context: > >> >>>>>>>>>>>>>>> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28633477.html > >> >>>>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >> Nabble.com. > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>>> _______________________________________________ > >> >>>>>>>>>>>>>>> SciPy-User mailing list > >> >>>>>>>>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> _______________________________________________ > >> >>>>>>>>>>>>>> SciPy-User mailing list > >> >>>>>>>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>>> > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> -- > >> >>>>>>>>>>>>> View this message in context: > >> >>>>>>>>>>>>> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28634924.html > >> >>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >> Nabble.com. 
> >> >>>>>>>>>>>>> > >> >>>>>>>>>>>>> _______________________________________________ > >> >>>>>>>>>>>>> SciPy-User mailing list > >> >>>>>>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>>>>>> > >> >>>>>>>>>>>> _______________________________________________ > >> >>>>>>>>>>>> SciPy-User mailing list > >> >>>>>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>>>>> > >> >>>>>>>>>>>> > >> >>>>>>>>>>> > >> >>>>>>>>>>> > >> >>>>>>>>>> > >> >>>>>>>>>> -- > >> >>>>>>>>>> View this message in context: > >> >>>>>>>>>> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28640656.html > >> >>>>>>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>>>>>>>>> > >> >>>>>>>>>> _______________________________________________ > >> >>>>>>>>>> SciPy-User mailing list > >> >>>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>>> > >> >>>>>>>>> _______________________________________________ > >> >>>>>>>>> SciPy-User mailing list > >> >>>>>>>>> SciPy-User at scipy.org > >> >>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>>> > >> >>>>>>>>> > >> >>>>>>>> > >> >>>>>>>> -- > >> >>>>>>>> View this message in context: > >> >>>>>>>> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28642434.html > >> >>>>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>>>>>>> > >> >>>>>>>> _______________________________________________ > >> >>>>>>>> SciPy-User mailing list > >> >>>>>>>> SciPy-User at scipy.org > >> >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>>> > >> >>>>>>> _______________________________________________ > >> >>>>>>> SciPy-User mailing list > >> >>>>>>> SciPy-User at scipy.org > >> >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>>> > >> >>>>>>> > >> >>>>>> > >> >>>>>> -- > >> >>>>>> View this message in context: > >> >>>>>> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28686356.html > >> >>>>>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>>>>> > >> >>>>>> _______________________________________________ > >> >>>>>> SciPy-User mailing list > >> >>>>>> SciPy-User at scipy.org > >> >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>>> > >> >>>>> _______________________________________________ > >> >>>>> SciPy-User mailing list > >> >>>>> SciPy-User at scipy.org > >> >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>>> > >> >>>>> > >> >>>> > >> >>>> -- > >> >>>> View this message in context: > >> >>>> > http://old.nabble.com/removing-for-loops...-tp28633477p28711249.html > >> >>>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>>> > >> >>>> _______________________________________________ > >> >>>> SciPy-User mailing list > >> >>>> SciPy-User at scipy.org > >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>>> > >> >>> _______________________________________________ > >> >>> SciPy-User mailing list > >> >>> SciPy-User at scipy.org > >> >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> > >> >>> > >> >> > >> >> -- > >> >> View this message in context: > >> >> http://old.nabble.com/removing-for-loops...-tp28633477p28711444.html > >> >> Sent from the Scipy-User mailing list archive at Nabble.com. 
From mdekauwe at gmail.com  Tue Jun  8 21:34:22 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Tue, 8 Jun 2010 18:34:22 -0700 (PDT)
Subject: [SciPy-User] re[SciPy-user] moving for loops...
In-Reply-To: <28824023.post@talk.nabble.com>
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com>
	<28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com>
	<28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com>
	<28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com>
	<28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com>
Message-ID: <28825042.post@talk.nabble.com>


Actually that should have been...

for month in xrange(nummonths):
    tmp = jules[xrange(month, numyears * nummonths, nummonths),VAR,:,0]
    tmp[tmp < 0.0] = np.nan
    data[month,:] = np.mean(tmp, axis=0)

sorry!
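For reference, here is that corrected pattern as a minimal self-contained
script. The shapes, the variable index VAR and the random data are
invented here; only the indexing is the point. The xrange() fancy index
can equivalently be written as a strided slice, and note that np.mean()
returns NaN for any point whose column still contains a NaN, which is
what the masked-array suggestion further down avoids:

import numpy as np

nummonths, numyears, numpts, numvars = 12, 11, 10, 26
VAR = 5  # hypothetical variable index

# synthetic stand-in for the model output, shape (time, var, point, 1)
jules = np.random.random((numyears * nummonths, numvars, numpts, 1))

data = np.zeros((nummonths, numpts), dtype=np.float32)
for month in xrange(nummonths):
    # rows month, month + 12, month + 24, ... pick out the same calendar
    # month in every year; copy, because a plain slice is a view and the
    # in-place NaN flagging below would otherwise modify jules itself
    tmp = jules[month::nummonths, VAR, :, 0].copy()
    tmp[tmp < 0.0] = np.nan                 # flag dodgy values
    data[month, :] = np.mean(tmp, axis=0)   # average over the years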
mdekauwe wrote:
> 
> OK...
> 
> but if I do...
> 
> In [28]: np.mod(np.arange(nummonths*numyears), nummonths).reshape((-1,
> nummonths))
> Out[28]:
> array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
> 
> When really I would be after something like this I think?
> 
> array([  0,  12,  24,  36,  48,  60,  72,  84,  96, 108, 120],
>       [  1,  13,  25,  37,  49,  61,  73,  85,  97, 109, 121],
>       [  2,  14,  26,  38,  50,  62,  74,  86,  98, 110, 122]
>       etc, etc
> 
> i.e. so for each month jump across the years.
> 
> Not quite sure of this example...this is what I currently have which does
> seem to work, though I guess not completely efficiently.
> 
> for month in xrange(nummonths):
>     tmp = jules[xrange(0, numyears * nummonths, nummonths),VAR,:,0]
>     tmp[tmp < 0.0] = np.nan
>     data[month,:] = np.mean(tmp, axis=0)
> 
> 
> Benjamin Root-2 wrote:
>> 
>> If you want an average for each month from your timeseries, then the
>> sneaky way would be to reshape your array so that the time dimension is
>> split into two (month, year) dimensions.
>> 
>> For a 1-D array, this would be:
>> 
>>> dataarray = numpy.mod(numpy.arange(36), 12)
>>> print dataarray
>> array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11,  0,  1,  2,  3,  4,
>>         5,  6,  7,  8,  9, 10, 11,  0,  1,  2,  3,  4,  5,  6,  7,  8,  9,
>>        10, 11])
>>> datamatrix = dataarray.reshape((-1, 12))
>>> print datamatrix
>> array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
>> 
>> Hope that helps.
>> 
>> Ben Root
-- 
View this message in context:
http://old.nabble.com/removing-for-loops...-tp28633477p28825042.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From ben.root at ou.edu  Tue Jun  8 22:39:11 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Tue, 8 Jun 2010 21:39:11 -0500
Subject: [SciPy-User] re[SciPy-user] moving for loops...
In-Reply-To: 
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com>
	<28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com>
	<28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com>
	<28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com>
	<28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com>
Message-ID: 

Correction for me as well.  To mask out the negative values, use masked
arrays.  So we will turn jules_2d into a masked array (second line), then
all subsequent commands will still work as expected.  It is very similar
to replacing negative values with nans and using nanmin().

> jules_2d = jules.reshape((-1, 12))
> jules_2d = np.ma.masked_array(jules_2d, mask=jules_2d < 0.0)
> jules_monthly = np.mean(jules_2d, axis=0)
> print jules_monthly.shape
(12,)

Ben Root

On Tue, Jun 8, 2010 at 7:49 PM, Benjamin Root wrote:

> The np.mod in my example caused the data points to stay within [0, 11] in
> order to illustrate that these are months.  In my example, months are
> columns, years are rows.  In your desired output, months are rows and
> years are columns.  It makes very little difference which way you have it.
>
> Anyway, let's imagine that we have a time series of data "jules".  We can
> easily reshape this like so:
>
> > jules_2d = jules.reshape((-1, 12))
> > jules_monthly = np.mean(jules_2d, axis=0)
> > print jules_monthly.shape
> (12,)
>
> voila!  You have 12 values in jules_monthly which are means for that
> month across all years.
>
> protip - if you want yearly averages just change the axis parameter in
> np.mean():
>
> > jules_yearly = np.mean(jules_2d, axis=1)
>
> I hope that makes my previous explanation clearer.
>
> Ben Root
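Pulling Ben's two messages together, a minimal runnable sketch of the
masked-array version. The array sizes, the -999.0 missing-data flag and
the random data are made up here; only the reshape/mask steps are the
point:

import numpy as np

numyears, nummonths = 11, 12

# synthetic monthly time series with a few negative "missing" flags
jules = np.random.random(numyears * nummonths)
jules[::17] = -999.0

jules_2d = jules.reshape((-1, nummonths))   # shape (numyears, nummonths)
jules_2d = np.ma.masked_array(jules_2d, mask=jules_2d < 0.0)

jules_monthly = np.mean(jules_2d, axis=0)   # masked entries are ignored
print jules_monthly.shape                   # -> (12,)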
From charlesr.harris at gmail.com  Tue Jun  8 23:28:43 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Tue, 8 Jun 2010 21:28:43 -0600
Subject: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <28767579.post@talk.nabble.com>
References: <28767579.post@talk.nabble.com>
Message-ID: 

On Thu, Jun 3, 2010 at 7:36 AM, tinauser wrote:

> Hallo,
> I'm pretty new to both Python and C.
> I have a C application that runs a Python script. The Python script then
> uses some C functions written in the code that embeds the script itself.
> I want to pass arrays between the C code (where an API is giving pointers
> to data I want to plot) and Python (a GUI).
>
> What I've done so far is to allocate, before calling the Python script, a
> PyArray with PyArray_SimpleNew.
> Since the data are unsigned char, the command is:
>
>     my_second_array = (PyArrayObject *)PyArray_SimpleNew(2, dim, NPY_UBYTE);
>
> When I call the Python script, I send my_second_array. On a timer,
> my_second_array is used as a parameter for a C-written function: the idea
> is to assign the pointer of a frame to my_second_array->data.
>
>     PyArrayObject *Pymatout_img = NULL;
>     /* Pymatout_img is the matrix that was created in C during the
>        initialization with PyArray_SimpleNew */
>     PyArg_ParseTuple(args, "O", &Pymatout_img);
>     Pymatout_img->data = cam_frame->data;
>
> The problem is that the compiler (I'm using Visual C 2008) says that it
> cannot convert *char to *unsigned char...
>
> Can someone explain what I'm doing wrong?

Can you show the code that is causing the problem? There should be a line
number somewhere.

Chuck

From oliphant at enthought.com  Wed Jun  9 00:34:22 2010
From: oliphant at enthought.com (Travis Oliphant)
Date: Tue, 8 Jun 2010 23:34:22 -0500
Subject: [SciPy-User] curve_fit error: Optional parameters not found...
In-Reply-To: 
References: 
Message-ID: <6557BD28-36BC-41D8-93D8-4DC6EED313D4@enthought.com>


On Jun 8, 2010, at 11:36 AM, Jeremy Conlin wrote:

> I downloaded scipy 0.8b1 yesterday; I was excited to try out the new
> curve_fit function.  Today I have been playing with it and some of the
> time it works.  Other times I get the error:
>
> RuntimeError: Optimal parameters not found: Both actual and predicted
> relative reductions in the sum of squares are at most 0.000000 and the
> relative error between two consecutive iterates is at most 0.000000

At the core of this routine is a nonlinear least-squares optimization.
Optimization algorithms can fail to converge.  It looks like that is
happening here.  You can try providing weights to your data points or
adjusting the function that is being fit.

The "fix" could be to improve the error reporting and handling, but there
are always going to be cases where the algorithm won't be able to find an
optimum.

-Travis
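As a concrete illustration of the adjustments Travis suggests: the model
and data below are invented, and only the p0/sigma keywords and the
RuntimeError come from curve_fit itself.

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + 0.05 * np.random.randn(50)

try:
    # a sensible initial guess (p0) often decides between convergence and
    # the RuntimeError quoted above; sigma would supply per-point weights
    popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))
except RuntimeError, e:
    print "fit did not converge:", e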
From josef.pktd at gmail.com  Wed Jun  9 01:02:41 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 9 Jun 2010 01:02:41 -0400
Subject: Re: [SciPy-User] curve_fit error: Optional parameters not found...
In-Reply-To: <6557BD28-36BC-41D8-93D8-4DC6EED313D4@enthought.com>
References: <6557BD28-36BC-41D8-93D8-4DC6EED313D4@enthought.com>
Message-ID: 

On Wed, Jun 9, 2010 at 12:34 AM, Travis Oliphant wrote:
>
> On Jun 8, 2010, at 11:36 AM, Jeremy Conlin wrote:
>
>> I downloaded scipy 0.8b1 yesterday; I was excited to try out the new
>> curve_fit function. Today I have been playing with it and some of the
>> time it works. Other times I get the error:
>>
>> RuntimeError: Optimal parameters not found: Both actual and predicted
>> relative reductions in the sum of squares are at most 0.000000 and the
>> relative error between two consecutive iterates is at most 0.000000
>
> At the core of this routine is a nonlinear least-squares optimization.
> Optimization algorithms can fail to converge. It looks like that is
> happening here. You can try providing weights to your data points or
> adjusting the function that is being fit.
>
> The "fix" could be to improve the error reporting and handling, but
> there are always going to be cases where the algorithm won't be able to
> find an optimum.

Travis, do you have an opinion about
http://projects.scipy.org/scipy/ticket/984
(and associated http://projects.scipy.org/scipy/ticket/1111 ) ?

In some cases, raising an exception for ier>1 looks too strict.

Josef

>
> -Travis
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
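Until the error handling is settled, one practical pattern is to catch
the RuntimeError and retry from other starting points -- a sketch, with a
hypothetical stand-in model:

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):            # stand-in for the function being fit
    return a * np.exp(-b * x)

def fit_with_retries(x, y, starts):
    # try several initial guesses instead of giving up on the first failure
    for p0 in starts:
        try:
            return curve_fit(model, x, y, p0=p0)
        except RuntimeError:   # "Optimal parameters not found: ..."
            continue
    raise RuntimeError("no starting point converged")

# popt, pcov = fit_with_retries(xdata, ydata, [(1.0, 0.1), (5.0, 1.0)])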
From nwagner at iam.uni-stuttgart.de  Wed Jun  9 02:53:55 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 09 Jun 2010 08:53:55 +0200
Subject: Re: [SciPy-User] scipy.io.matlab.loadmat error
In-Reply-To: 
References: <8CA9D85A-CA93-4B7F-8434-02F633C44090@gmail.com>
Message-ID: 

On Fri, 4 Jun 2010 20:30:21 -0300
  Fernando Guimarães Ferreira wrote:
> So,
>
> Things have changed... I rebuilt numpy and scipy. It turns out that
> scipy.io.matlab.loadmat is working again...
> However scipy.test('1', '10') is still failing.
>
> I attached the output... I can't understand why...
>
> I installed the dmg package from the SourceForge repository.
>
> Any idea?
>
> Cheers,
> Fernando

I cannot reproduce the problem here.

>>> numpy.__version__
'2.0.0.dev8460'
>>> scipy.__version__
'0.9.0.dev6495'

======================================================================
ERROR: test_decomp.test_lapack_misaligned(<function solve at ...>,
(array([[ 1.734e-255, 8.189e-217, 4.025e-178, 1.903e-139, 9.344e-101, ...
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/linalg/tests/test_decomp.py", line 1071, in check_lapack_misaligned
    func(*a,**kwargs)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 49, in solve
    a1, b1 = map(asarray_chkfinite,(a,b))
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/function_base.py", line 528, in asarray_chkfinite
    "array must not contain infs or NaNs")
ValueError: array must not contain infs or NaNs

======================================================================
ERROR: Ticket #1124.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/signal/tests/test_signaltools.py", line 287, in test_none
    signal.medfilt(None)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/signal/signaltools.py", line 317, in medfilt
    return sigtools._order_filterND(volume,domain,order)
ValueError: order_filterND not available for this type

======================================================================
ERROR: test_mpmath.test_expi_complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/home/nwagner/local/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/special/tests/test_mpmath.py", line 46, in test_expi_complex
    dataset = np.array(dataset, dtype=np.complex_)
TypeError: a float is required

----------------------------------------------------------------------
Ran 4625 tests in 166.157s

FAILED (KNOWNFAIL=12, SKIP=17, errors=3)

From tinauser at libero.it  Wed Jun  9 07:38:21 2010
From: tinauser at libero.it (tinauser)
Date: Wed, 9 Jun 2010 04:38:21 -0700 (PDT)
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: 
References: <28767579.post@talk.nabble.com>
Message-ID: <28829120.post@talk.nabble.com>

Dear Charles,
thanks for the reply.
The part of code causing the problem was exactly this:

    Pymatout_img->data = cam_frame->data;

where Pymatout_img is a PyArrayObject and cam_frame is a structure holding
a pointer to unsigned char data.

The code works all right if I recast it in this way:

    Pymatout_img->data = (char*)cam_frame->data;

I'm not sure if this is allowed; I guessed it works because even if
Pymatout_img->data is always a pointer to char, the PyArrayObject looks in
->descr->type_num to see what the data type is.

cheers

Charles R Harris wrote:
>
> On Thu, Jun 3, 2010 at 7:36 AM, tinauser wrote:
>
> [... original question quoted in full ...]
>
> Can you show the code that is causing the problem? There should be a
> line number somewhere.
>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
View this message in context:
http://old.nabble.com/numpy-and-C-tp28767579p28829120.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From charlesr.harris at gmail.com  Wed Jun  9 09:46:19 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 9 Jun 2010 07:46:19 -0600
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <28829120.post@talk.nabble.com>
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com>
Message-ID: 

On Wed, Jun 9, 2010 at 5:38 AM, tinauser wrote:

> Dear Charles,
> thanks for the reply.
> The part of code causing the problem was exactly this:
>
>     Pymatout_img->data = cam_frame->data;
>
> where Pymatout_img is a PyArrayObject and cam_frame is a structure
> holding a pointer to unsigned char data.
>
> The code works all right if I recast it in this way:
>
>     Pymatout_img->data = (char*)cam_frame->data;
>
> I'm not sure if this is allowed; I guessed it works because even if
> Pymatout_img->data is always a pointer to char, the PyArrayObject looks
> in ->descr->type_num to see what the data type is.

Numpy uses char* all over the place and later casts to the needed type,
it's the old way of doing void*. So your explicit cast is fine. For some
compilers, gcc for example, you also need to use a compiler flag to let
the compiler know that you are going to do such things. In gcc the flag is
-fno-strict-aliasing but I don't think you need to worry about this in VC.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Wed Jun  9 10:23:34 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 9 Jun 2010 08:23:34 -0600
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: 
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com>
Message-ID: 

On Wed, Jun 9, 2010 at 7:46 AM, Charles R Harris wrote:

> [... previous reply quoted in full ...]

That said, managing the data in this way can be problematic as you need to
track alignment and worry about freeing of memory. You might want to look
at PyArray_SimpleNewFromData.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
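At the Python level, numpy.frombuffer illustrates the same no-copy idea
(a sketch; the bytearray below merely stands in for an externally
allocated camera buffer):

import numpy as np

raw = bytearray(8 * 10)       # stand-in for the camera's frame buffer
frame = np.frombuffer(raw, dtype=np.uint8).reshape(8, 10)
print frame.flags.owndata     # False: numpy will not try to free it
frame[0, 0] = 255             # writes go straight into the buffer
print raw[0]                  # -> 255, the array is a view, not a copy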
From tinauser at libero.it  Wed Jun  9 10:35:09 2010
From: tinauser at libero.it (tinauser)
Date: Wed, 9 Jun 2010 07:35:09 -0700 (PDT)
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: 
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com>
Message-ID: <28831237.post@talk.nabble.com>

Dear Charles,

thanks again for the replies.
Why do you say that it is difficult to free the memory?
What I do is to allocate the memory (and Py_INCREF it) before calling the
Python script. The Python script then uses a timer to call a C function to
which the allocated PyArrayObject (created with PyArray_SimpleNew) is
passed. In C, the pointer of the PyArray is assigned to a pointer that
points to a sort of data buffer that is filled from a camera. The data
buffer is allocated elsewhere.
When the Python GUI is closed, I just decref my PyArrayObject, which I'm
basically using just to pass pointer values.

Charles R Harris wrote:
>
> [... previous exchange quoted in full ...]
>
> That said, managing the data in this way can be problematic as you need
> to track alignment and worry about freeing of memory. You might want to
> look at PyArray_SimpleNewFromData.
>
> Chuck

--
View this message in context:
http://old.nabble.com/numpy-and-C-tp28767579p28831237.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From charlesr.harris at gmail.com  Wed Jun  9 10:57:05 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 9 Jun 2010 08:57:05 -0600
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <28831237.post@talk.nabble.com>
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com>
 <28831237.post@talk.nabble.com>
Message-ID: 

On Wed, Jun 9, 2010 at 8:35 AM, tinauser wrote:

> Dear Charles,
>
> thanks again for the replies.
> Why do you say that it is difficult to free the memory?
> What I do is to allocate the memory (and Py_INCREF it) before calling
> the Python script. The Python script then uses a timer to call a C
> function to which the allocated PyArrayObject (created with
> PyArray_SimpleNew) is passed. In C, the pointer of the PyArray is
> assigned to a pointer that points to a sort of data buffer that is
> filled from a camera. The data buffer is allocated elsewhere.
> When the Python GUI is closed, I just decref my PyArrayObject, which I'm
> basically using just to pass pointer values.

I don't know the details of your larger design, so perhaps my concerns are
irrelevant. The virtue of PyArray_SimpleNewFromData is that the array can
be deallocated without affecting the buffer memory.

PyObject* PyArray_SimpleNewFromData(int nd, npy_intp* dims, int typenum,
                                    void* data)

    Sometimes, you want to wrap memory allocated elsewhere into an
    ndarray object for downstream use. This routine makes it
    straightforward to do that. The first three arguments are the same as
    in PyArray_SimpleNew, the final argument is a pointer to a block of
    contiguous memory that the ndarray should use as its data-buffer,
    which will be interpreted in C-style contiguous fashion. A new
    reference to an ndarray is returned, but the ndarray will not own its
    data. When this ndarray is deallocated, the pointer will not be freed.

    You should ensure that the provided memory is not freed while the
    returned array is in existence. The easiest way to handle this is if
    data comes from another reference-counted Python object. The
    reference count on this object should be increased after the pointer
    is passed in, and the base member of the returned ndarray should
    point to the Python object that owns the data. Then, when the ndarray
    is deallocated, the base member will be DECREF'd appropriately. If
    you want the memory to be freed as soon as the ndarray is deallocated
    then simply set the OWNDATA flag on the returned ndarray.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de  Wed Jun  9 11:55:30 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 09 Jun 2010 17:55:30 +0200
Subject: [SciPy-User] web applications
Message-ID: 

Hi all,

AFAIK matplotlib can be used in web applications.
How about numpy and scipy?

Any pointer or reference would be appreciated.

Thanks in advance

Nils

From robert.kern at gmail.com  Wed Jun  9 12:14:10 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 9 Jun 2010 12:14:10 -0400
Subject: Re: [SciPy-User] web applications
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 9, 2010 at 11:55, Nils Wagner wrote:
> Hi all,
>
> AFAIK matplotlib can be used in web applications.
> How about numpy and scipy ?

Yes, of course. Since matplotlib uses numpy, it would be weird if numpy
didn't work in a web app.

> Any pointer or reference would be appreciated.

Just use numpy or scipy exactly as you would anywhere else.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From gokhansever at gmail.com  Wed Jun  9 12:33:28 2010
From: gokhansever at gmail.com (Gökhan Sever)
Date: Wed, 9 Jun 2010 11:33:28 -0500
Subject: Re: [SciPy-User] web applications
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 9, 2010 at 10:55 AM, Nils Wagner wrote:

> Hi all,
>
> AFAIK matplotlib can be used in web applications.
> How about numpy and scipy ?
>
> Any pointer or reference would be appreciated.

I wish there were a SciPy conference tutorial titled "Python Web-based
applications for Scientists" at
http://conference.scipy.org/scipy2010/tutorialsUV.html

Who knows, there may be a brave and knowledgeable soul out there just
waiting for your e-mail to show themselves.

--
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
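As a minimal illustration (a sketch, not tied to any particular
framework): a bare WSGI app that renders a numpy/matplotlib figure to PNG.
Any of the frameworks discussed in this thread can wrap the same few
lines:

import numpy as np
import matplotlib
matplotlib.use('Agg')          # headless backend for server use
import matplotlib.pyplot as plt
from cStringIO import StringIO

def application(environ, start_response):
    x = np.linspace(0, 2 * np.pi, 200)
    plt.figure()
    plt.plot(x, np.sin(x))
    buf = StringIO()
    plt.savefig(buf, format='png')
    plt.close()
    start_response('200 OK', [('Content-Type', 'image/png')])
    return [buf.getvalue()]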
From nwagner at iam.uni-stuttgart.de  Wed Jun  9 14:49:10 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 09 Jun 2010 20:49:10 +0200
Subject: Re: [SciPy-User] web applications
In-Reply-To: 
References: 
Message-ID: 

On Wed, 9 Jun 2010 12:14:10 -0400 Robert Kern wrote:
> On Wed, Jun 9, 2010 at 11:55, Nils Wagner wrote:
>> Hi all,
>>
>> AFAIK matplotlib can be used in web applications.
>> How about numpy and scipy ?
>
> Yes, of course. Since matplotlib uses numpy, it would be weird if
> numpy didn't work in a web app.
>
>> Any pointer or reference would be appreciated.
>
> Just use numpy or scipy exactly as you would anywhere else.

There are so many web frameworks, e.g. Django, Grok, Pylons, TurboGears,
web2py, Zope.

What is recommended?

http://wiki.python.org/moin/WebFrameworks

Nils

From massimodisasha at gmail.com  Wed Jun  9 14:31:17 2010
From: massimodisasha at gmail.com (Massimo Di Stefano)
Date: Wed, 9 Jun 2010 20:31:17 +0200
Subject: [SciPy-User] matplotlib and large array
Message-ID: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>

Hi All,

I need to work with a relatively large image, about 60 MB (a single-band
GeoTIFF file). I store it in Python as a numpy array using python-gdal;
the array dimension is 7173 x 7924, single band. But trying to display it
with matshow/imshow or other matplotlib functions, Python freezes and is
not able to load the image.

If I use a subset of the image I'm able to display it, or at least I had
to reduce its resolution using hacks like:

reduced_array = array[::3,::3]

I don't need the full-resolution dataset when the image is displayed
fully zoomed out, so the reduction "reduced_array = array[::3,::3]" is
good to show the complete image, but when I zoom into the image I
obviously lose data (less resolution).

What do you use to display large datasets?

I'm thinking about a "pyramid" of multiple arrays for the different zoom
levels... but maybe this idea is not so cool. Has someone already
developed similar code?

Thanks to all for any suggestion!

Regards,

Massimo

From Dharhas.Pothina at twdb.state.tx.us  Wed Jun  9 15:26:19 2010
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 09 Jun 2010 14:26:19 -0500
Subject: Re: [SciPy-User] web applications
In-Reply-To: 
References: 
Message-ID: <4C0FA48B.63BA.009B.0@twdb.state.tx.us>

I'm developing something using Django and Matplotlib (learning as I go)
and it seems to be working fairly well. Django has excellent tutorials
and has been fairly easy to pick up.

- dharhas

>>> "Nils Wagner" 6/9/2010 1:49 PM >>>
On Wed, 9 Jun 2010 12:14:10 -0400 Robert Kern wrote:
> On Wed, Jun 9, 2010 at 11:55, Nils Wagner wrote:
>> Hi all,
>>
>> AFAIK matplotlib can be used in web applications.
>> How about numpy and scipy ?
>
> Yes, of course. Since matplotlib uses numpy, it would be weird if
> numpy didn't work in a web app.
>
>> Any pointer or reference would be appreciated.
>
> Just use numpy or scipy exactly as you would anywhere else.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

There are so many web frameworks, e.g. Django, Grok, Pylons, TurboGears,
web2py, Zope.

What is recommended?

http://wiki.python.org/moin/WebFrameworks

Nils
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From Chris.Barker at noaa.gov  Wed Jun  9 15:42:10 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 09 Jun 2010 12:42:10 -0700
Subject: Re: [SciPy-User] web applications
In-Reply-To: <4C0FA48B.63BA.009B.0@twdb.state.tx.us>
References: <4C0FA48B.63BA.009B.0@twdb.state.tx.us>
Message-ID: <4C0FEE92.1070406@noaa.gov>

Dharhas Pothina wrote:
> I'm developing something using Django and Matplotlib

Django has the advantage of "one stop shopping" -- it is an integrated
package of all the pieces you need.

We use Pylons here, because it is less restrictive about what individual
pieces you use -- you can swap out to a different template system or
different ORM, or...

To a great degree, it's a matter of taste.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From GISxperts at web.de  Wed Jun  9 16:36:58 2010
From: GISxperts at web.de (Hannes Reuter)
Date: Wed, 9 Jun 2010 22:36:58 +0200 (CEST)
Subject: Re: [SciPy-User] matplotlib and large array
In-Reply-To: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>
References: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>
Message-ID: <1801860126.794125.1276115818678.JavaMail.fmail@mwmweb072>

Isn't qgis a way of displaying it?

Cheers
Hannes

--
Dr. Hannes Isaak Reuter
gisxperts gbr

-----Original Message-----
From: Massimo Di Stefano
Sent: 09.06.2010 20:31:17
To: matplotlib-users at lists.sourceforge.net
Subject: [SciPy-User] matplotlib and large array

[... Massimo's original message quoted in full ...]

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Hannes.vcf
Type: text/x-vcard
Size: 287 bytes
Desc: not available
URL: 

From david_baddeley at yahoo.com.au  Wed Jun  9 19:11:18 2010
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 9 Jun 2010 16:11:18 -0700 (PDT)
Subject: Re: [SciPy-User] matplotlib and large array
In-Reply-To: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>
References: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>
Message-ID: <312451.71972.qm@web33005.mail.mud.yahoo.com>

Hi Massimo,

matplotlib is pretty awful with huge images -- part of the problem is
that it stores a colormapped copy of the image, which means that you go
from a single image to R, G, B, & A channels. I have a feeling that this
copy also uses a floating-point datatype. In any case you're multiplying
your memory usage by at least 4 (potentially much more if it is using
floats/doubles and your data is 8 or 16 bit integers). Another problem is
that it interpolates when the screen pixels and image pixels don't match
up (which is most of the time). This is really nice when you've got small
images you're scaling up, but quite a performance drag for larger images.

I ended up coding up my own, wxpython based, viewer which downsamples at
low magnification and only pulls out the currently visible ROI at high
mag. Only this visible region is then colourmapped (at most the window
size -- so around 1k by 1k pixels if you've got the window maximised).
This makes a huge difference to memory consumption and performance.
There's no provision for plotting axes or other stuff over the image
though.

My viewer is designed for 3D images with multiple colour channels
(microscopy data sets), but will handle 2D images fine. Unfortunately
it's currently got a fair bit of application dependent
cruft/dependencies. I've been meaning to strip these out so it can be
used standalone for a while, so if you don't find an alternative viewer,
drop me a line and I'll see if I can get it into some sort of shape.

cheers,
David

----- Original Message ----
From: Massimo Di Stefano
To: matplotlib-users at lists.sourceforge.net
Cc: SciPy Users List
Sent: Thu, 10 June, 2010 6:31:17 AM
Subject: [SciPy-User] matplotlib and large array

[... original message quoted in full ...]

Thanks to all for any suggestion!

Regards,

Massimo
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
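The "pyramid" Massimo describes can be sketched in a few lines of numpy
(illustrative only; a real viewer like the ones described in this thread
would block-average rather than stride, to limit aliasing):

import numpy as np

def build_pyramid(img, levels=4):
    # level i holds every 2**i-th sample in each direction
    return [img[::2 ** i, ::2 ** i] for i in xrange(levels)]

def choose_level(pyramid, visible_rows, screen_rows):
    # pick the coarsest level that still gives >= 1 sample per screen pixel
    for level in xrange(len(pyramid) - 1, -1, -1):
        if visible_rows // (2 ** level) >= screen_rows:
            return level
    return 0

# e.g. fully zoomed out on a 7173 x 7924 image with a 1024-pixel-tall
# window, choose_level picks level 2 (every 4th sample); zooming in to a
# small region falls back to level 0 (full resolution).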
From david_baddeley at yahoo.com.au  Wed Jun  9 19:39:20 2010
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 9 Jun 2010 16:39:20 -0700 (PDT)
Subject: Re: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <28831237.post@talk.nabble.com>
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com>
 <28831237.post@talk.nabble.com>
Message-ID: <219460.24132.qm@web33001.mail.mud.yahoo.com>

If your description holds, what you're doing is allocating a block of
memory (with PyArray_SimpleNew), then changing the pointer so that it
points to your camera buffer, without ever using the memory you
allocated. The original memory allocated with PyArray_SimpleNew will get
leaked at this point. When Python comes to garbage collect your array,
the camera buffer will be dealloced instead of the original block of
memory. This sounds all BAD!!!

I have a feeling that PyArray_SimpleNew also sets the reference count to
1 so there's no need to incref it (although you'd be well advised to
check up on this). If this is the case, increfing effectively ensures
that the array will never be garbage collected and creates a memory leak.

Depending on how the data gets from the camera into the buffer you've got
a few options -- is it a preallocated buffer which gets constantly
refreshed by the camera, or is it a buffer allocated on the fly to hold
the results of a command such as camera_get_frame(*buffer)?

If it's the first, you could either use PyArray_SimpleNewFromData on your
camera buffer, with the caveat that the values in the resulting array
will be constantly refreshed from the camera; or use memcpy to copy the
contents of the buffer to your newly allocated (with PyArray_SimpleNew)
array -- this way the Python array won't change as the camera takes
another frame. This also has the advantage that the C code doesn't need
to worry about whether Python is still using the original buffer before
deleting it.

If it's the second, the buffer contents won't be changing with time and
I'd either use PyArray_SimpleNewFromData, or preferably, as this means
you can let Python handle the garbage collection for the frame, use
PyArray_SimpleNew to allocate an array and pass the data pointer of this
array to your camera_get_frame(*buffer) method. If you are stuck with a
pre-allocated array and want to keep the Python and C memory management
as separate as possible, you could also use the memcpy route.

cheers,
David

----- Original Message ----
From: tinauser
To: scipy-user at scipy.org
Sent: Thu, 10 June, 2010 2:35:09 AM
Subject: Re: [SciPy-User] [SciPy-user] numpy and C

[... tinauser's message of Wed Jun 9 10:35:09, quoted in full ...]
From vincent at vincentdavis.net  Wed Jun  9 21:13:35 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Wed, 9 Jun 2010 19:13:35 -0600
Subject: Re: [SciPy-User] web applications
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jun 9, 2010 at 12:49 PM, Nils Wagner wrote:
> There are so many web frameworks, e.g. Django, Grok, Pylons,
> TurboGears, web2py, Zope.
>
> What is recommended?

I guess it depends on your end goal, but I found web2py very quick to get
up and running, and I have done a little using matplotlib and numpy/scipy
with it. I would need to look a little, but I know there is a very nice
tutorial on using matplotlib with web2py.
Vincent

> http://wiki.python.org/moin/WebFrameworks
>
> Nils
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From Chris.Barker at noaa.gov  Wed Jun  9 22:34:56 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 09 Jun 2010 19:34:56 -0700
Subject: Re: [SciPy-User] matplotlib and large array
In-Reply-To: <312451.71972.qm@web33005.mail.mud.yahoo.com>
References: <974A51CB-96CC-4810-AFB7-4FD1F80A673F@yahoo.it>
 <312451.71972.qm@web33005.mail.mud.yahoo.com>
Message-ID: <4C104F50.4080408@noaa.gov>

David Baddeley wrote:
> I ended up coding up my own, wxpython based, viewer

Us too (and when I say "us" I mean Dan Helfman wrote all the code). Ours
is specifically designed for geo-referenced data, raster and vector
(using GDAL for the reading). It uses wxPython for the GUI, and OpenGL
for fast rendering. It's essentially a toolkit for building custom
interactive data viewer/manipulators.

When it brings in a large image, it pyramids and tiles it, which takes a
bit of time, but then it's quite fast for zooming, panning, etc. In
theory, if GDAL reads it, our tool will too, but I'm not sure we've
brought in a greyscale geo-tiff yet, so there may be some tweaking
required for that.

You can get the maybe-not-quite-up-to-date source and binaries from:

http://bitbucket.org/dhelfman/maproom/wiki/Home

Send us a note offline if you're interested in more info.

-Chris

> which downsamples at low magnification and only pulls out the currently
> visible ROI at high mag. Only this visible region is then colourmapped
> (at most the window size -- so around 1k by 1k pixels if you've got the
> window maximised). This makes a huge difference to memory consumption
> and performance. There's no provision for plotting axes or other stuff
> over the image though. My viewer is designed for 3D images with
> multiple colour channels (microscopy data sets), but will handle 2D
> images fine. Unfortunately it's currently got a fair bit of application
> dependent cruft/dependencies. I've been meaning to strip these out so
> it can be used standalone for a while, so if you don't find an
> alternative viewer, drop me a line and I'll see if I can get it into
> some sort of shape.
>
> cheers,
> David
>
> ----- Original Message ----
> From: Massimo Di Stefano
> To: matplotlib-users at lists.sourceforge.net
> Cc: SciPy Users List
> Sent: Thu, 10 June, 2010 6:31:17 AM
> Subject: [SciPy-User] matplotlib and large array
>
> [... original message quoted in full ...]
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From charlie at tct-corp.com  Thu Jun 10 02:00:28 2010
From: charlie at tct-corp.com (ascetic)
Date: Wed, 9 Jun 2010 23:00:28 -0700 (PDT)
Subject: [SciPy-User] weave module bad file descriptor error
Message-ID: <0ce767e1-8f89-487c-a2a8-d35384e58ee3@40g2000pry.googlegroups.com>

I use the pythonxy package for an easily integrated installation of
Python and scipy (Windows XP). The following message is shown when I try
to execute the example (dict_sort.py) from scipy:

=====
Dict sort of 1000 items for 3000 iterations:
 speed in python: 0.59299993515
[0, 1, 2, 3, 4]
No module named msvccompiler in numpy.distutils; trying from distutils
Found executable C:\Program Files\pythonxy\mingw\bin\g++.exe
Traceback (most recent call last):
  File "C:\Python26\Lib\site-packages\scipy\weave\examples\dict_sort.py",
line 121, in <module>
    sort_compare(a,n)
  File "C:\Python26\Lib\site-packages\scipy\weave\examples\dict_sort.py",
line 89, in sort_compare
    b=c_sort(a)
  File "C:\Python26\Lib\site-packages\scipy\weave\examples\dict_sort.py",
line 38, in c_sort
    return inline_tools.inline(code,['adict'])
  File "..\inline_tools.py", line 335, in inline
    **kw)
  File "..\inline_tools.py", line 462, in compile_function
    verbose=verbose, **kw)
  File "..\ext_tools.py", line 365, in compile
    verbose = verbose, **kw)
  File "..\build_tools.py", line 272, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "C:\Python26\lib\site-packages\numpy\distutils\core.py", line 184,
in setup
    return old_setup(**new_attr)
  File "C:\Python26\lib\distutils\core.py", line 162, in setup
    raise SystemExit, error
CompileError: error: Bad file descriptor
=====

But the Linux version can run it very well, so it does not seem to be a
problem in the weave or distutils modules themselves.

Can anyone help me solve this problem? Thanks!

ascetic

From seb.haase at gmail.com  Thu Jun 10 04:05:57 2010
From: seb.haase at gmail.com (Sebastian Haase)
Date: Thu, 10 Jun 2010 10:05:57 +0200
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of
 data-curves
Message-ID: 

Hi,

So far I have been using scipy.optimize.leastsq to satisfy all my curve
fitting needs. But now I am thinking about "global fitting" -- i.e.
fitting multiple datasets with shared parameters (e.g. ref here:
http://www.originlab.com/index.aspx?go=Products/Origin/DataAnalysis/CurveFitting/GlobalFitting)

I have looked here (http://www.scipy.org/Cookbook/FittingData) and here
(http://docs.scipy.org/doc/scipy/reference/optimize.html).

Can someone provide an example? Which of the routines of scipy.optimize
are "easiest" to use?

Finally, I'm thinking about a "much more" complicated fitting task:
fitting two sets of datasets with two types of functions. In total I have
10 datasets to be fit with a function f1, and 10 more to be fit with
function f2. Each function depends on 6 parameters A1,A2,A3, r1,r2,r3.
A1,A2,A3 should be identical ("shared") between all 20 sets, while
r1,r2,r3 should be shared between the i-th set of type f1 and the i-th
set of f2.

Last but not least, it would be nice if one could specify constraints
such that r1,r2,r3 > 0 and A1+A2+A3 == 1 and 0 <= Ai <= 1. ;-)
Is this too much?

Thanks for any help or hints,
Sebastian Haase
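One way to set this up with plain leastsq is to pack shared and
per-dataset parameters into a single vector and concatenate all the
residuals, so everything is minimized jointly. A sketch, where f1 and f2
are hypothetical stand-ins, not Sebastian's actual models:

import numpy as np
from scipy.optimize import leastsq

def f1(x, A, r):   # hypothetical stand-in for the first model type
    return A[0]*np.exp(-r[0]*x) + A[1]*np.exp(-r[1]*x) + A[2]*np.exp(-r[2]*x)

def f2(x, A, r):   # hypothetical stand-in for the second model type
    return A[0]*r[0]*x + A[1]*r[1]*x + A[2]*r[2]*x

def residuals(p, x, sets1, sets2):
    # sets1/sets2: lists of the 10 y-vectors to fit with f1 and f2
    A = p[:3]                      # A1, A2, A3: shared by all 20 datasets
    rs = p[3:].reshape(-1, 3)      # one (r1, r2, r3) per dataset pair
    res = []
    for i in xrange(len(sets1)):
        res.append(sets1[i] - f1(x, A, rs[i]))
        res.append(sets2[i] - f2(x, A, rs[i]))
    return np.concatenate(res)

# p0 = np.concatenate(([0.3, 0.3, 0.4], np.ones(3 * 10)))
# popt, ier = leastsq(residuals, p0, args=(x, sets1, sets2))

leastsq itself is unconstrained; A1+A2+A3 == 1 can be folded in by
fitting only A1 and A2 and setting A3 = 1 - A1 - A2, and positivity by
fitting log(r) instead of r. For explicit constraints,
scipy.optimize.fmin_slsqp is an option.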
From Dharhas.Pothina at twdb.state.tx.us  Thu Jun 10 08:13:36 2010
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 10 Jun 2010 07:13:36 -0500
Subject: Re: [SciPy-User] web applications
In-Reply-To: <4C0FEE92.1070406@noaa.gov>
References: <4C0FA48B.63BA.009B.0@twdb.state.tx.us> <4C0FEE92.1070406@noaa.gov>
Message-ID: <4C1090A0.63BA.009B.0@twdb.state.tx.us>

>>> Christopher Barker 6/9/2010 2:42 PM >>>
> Django has the advantage of "one stop shopping" -- it is an integrated
> package of all the pieces you need.

This is one of the main reasons I went with Django. It seemed the
simplest to learn that already had all the pieces, for someone who is
fairly new to web application programming. I don't know enough to want
to try and pick and choose different packages.

Another reason was hopefully, in the future, to be able to use GeoDjango,
which incorporates a lot of GIS features.

- dharhas

From rigal at rapideye.de  Thu Jun 10 08:57:10 2010
From: rigal at rapideye.de (Matthieu Rigal)
Date: Thu, 10 Jun 2010 14:57:10 +0200
Subject: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
Message-ID: <201006101457.11013.rigal@rapideye.de>

Hi folks,

Thanks for your help last time, even if I did not reply to my second
message...

I am using leastsq for several things, but it is returning strange values
for one of the cases I'm using it for. I simplified it to the code pasted
below. The effect is that it fits nothing, just giving back the
parameters given for initialization -- though some fitting is possible,
as you will see in the plotted graph. I'm using SciPy 0.7.

It might be a bug, a misusage from my side... or some data type
incompatibility I was not able to find on the net or in the source...

As you will see, if you transform the x-data to a numpy.int array (by
uncommenting a line below), the fitting works... is that to be expected?
It should then be somewhere in the doc, shouldn't it?

import numpy
from scipy.optimize import leastsq
import matplotlib.pyplot as plt

def LinearFit(p, y, x):
    a, b = p
    return y - (a*x + b)

aX = numpy.asarray([ 22.08742332, 23.43987274, 21.59165192, 24.80192566,
    26.11182976, 29.18944931, 27.89473152, 30.00043106,
    36.24227142, 30.45967293, 30.04778099, 28.11702538,
    29.31716728, 27.89473152, 20.59804916, 34.19070053,
    48.33156204, 50.82163239, 45.22343063, 42.80136108,
    30.71160889, 29.31716728, 25.14836884, 23.50605965,
    26.89011765, 40.35306168, 55.074543  , 58.57307816,
    60.77198792, 56.14603043, 39.29994583, 38.14756012,
    35.76476288, 27.31066895, 23.45325851, 30.46047974,
    37.53346634, 41.04254532, 54.47524643, 61.14104462,
    61.03421402, 56.14603043, 44.67305756, 35.13313675],
    dtype=numpy.float32)

aY = numpy.asarray([ 25.45091248, 25.50468063, 27.15722656, 25.10549927,
    28.44662094, 30.3882637 , 31.90523148, 34.12581253,
    36.62049484, 33.90032196, 34.04083252, 29.66094398,
    30.68564224, 29.31051826, 25.17509079, 37.28609848,
    42.86494827, 48.25041199, 46.88908005, 34.44023132,
    31.26217461, 31.8005867 , 28.34657669, 26.77126312,
    31.06710815, 41.03251266, 49.48557281, 52.79579926,
    50.865448  , 48.03937531, 39.30026245, 38.50889969,
    37.07154083, 31.61130905, 27.42698288, 30.84166718,
    30.84166718, 40.47367096, 50.37258148, 53.13900757,
    53.75816727, 52.74428177, 43.87319183, 33.70808029],
    dtype=numpy.float32)

#aX = numpy.asarray(numpy.rint(aX), dtype=numpy.int)

p0 = [1.0] + [aY.min() - aX.min()]
aParams, err, i, j, k = leastsq(LinearFit, p0, args=(aY, aX),
                                maxfev=10000, full_output=True)
aY0 = aParams[0] * aX + aParams[1]
print err, i, j, k
print aParams

plt.plot(aX, aY, '+', aX, aY0, '+')
plt.legend(['input', 'model'])
plt.show()

Thanks in advance for the help,
Best Regards,
M

--
Matthieu Rigal
RapidEye AG, Molkenmarkt 30, 14776 Brandenburg an der Havel, Germany

From rigal at rapideye.de  Thu Jun 10 10:02:23 2010
From: rigal at rapideye.de (Matthieu Rigal)
Date: Thu, 10 Jun 2010 16:02:23 +0200
Subject: Re: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
In-Reply-To: <201006101457.11013.rigal@rapideye.de>
References: <201006101457.11013.rigal@rapideye.de>
Message-ID: <201006101602.23128.rigal@rapideye.de>

OK, I've found the bug...

Somehow the leastsq function is not working if both data sets are of
float32 type.
By just adding the following line the problem is solved:

aX = numpy.asarray(aX, dtype=numpy.float64)

Is it a known bug? Should I add it to the bug tracker?

Best regards,
Matthieu

On Thursday 10 June 2010 14:57:10 Matthieu Rigal wrote:
> Hi folks,
>
> [... original message, code and data arrays quoted in full ...]
>
> Thanks in advance for the help,
> Best Regards,
> M

--
Matthieu Rigal
RapidEye AG, Molkenmarkt 30, 14776 Brandenburg an der Havel, Germany
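A defensive wrapper along the lines of that fix (a sketch; one plausible
explanation -- not verified against the 0.7 sources -- is that the
finite-difference step leastsq derives from double-precision machine
epsilon is too small to perturb float32 data, leaving a zero numerical
Jacobian so the solver never moves off p0):

import numpy as np
from scipy.optimize import leastsq

def leastsq64(func, p0, args=()):
    # promote all data arguments to float64 before handing them to leastsq
    args = tuple(np.asarray(a, dtype=np.float64) for a in args)
    return leastsq(func, p0, args=args, maxfev=10000, full_output=True)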
From charlesr.harris at gmail.com  Thu Jun 10 10:10:48 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 10 Jun 2010 08:10:48 -0600
Subject: Re: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
In-Reply-To: <201006101602.23128.rigal@rapideye.de>
References: <201006101457.11013.rigal@rapideye.de>
 <201006101602.23128.rigal@rapideye.de>
Message-ID: 

On Thu, Jun 10, 2010 at 8:02 AM, Matthieu Rigal wrote:

> OK, I've found the bug...
>
> Somehow the leastsq function is not working if both data sets are of
> float32 type.
> By just adding the following line the problem is solved:
>
> aX = numpy.asarray(aX, dtype=numpy.float64)
>
> Is it a known bug? Should I add it to the bug tracker?
>
> Best regards,
> Matthieu

I think you should open a ticket and include a simple example.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From denis-bz-gg at t-online.de  Thu Jun 10 10:16:47 2010
From: denis-bz-gg at t-online.de (denis)
Date: Thu, 10 Jun 2010 07:16:47 -0700 (PDT)
Subject: Re: [SciPy-User] Crossing of Splines
In-Reply-To: 
References: 
Message-ID: <580ae5ef-6125-4c35-90cc-62032c51fa31@j8g2000yqd.googlegroups.com>

On Jun 8, 8:22 pm, Marco wrote:
> Hi all!
>
> I have 2 different datasets which I fit using interpolate.splrep().
>
> I am interested in finding the point where the splines cross: as of
> now I use interpolate.splev() to evaluate each spline and then look
> for a zero in the difference of the evaluated splines.

Following Anne's idea, if you extend class UnivariateSpline to use given
knots, you could then subtract the whole splines / all their coefficients
and run roots() on the difference, along the lines

    a = myUniSpline( x, ya, s=s, knots= )
    b = myUniSpline( x, yb, s=s, knots= )
    a.minus( b.getcoeffs() )  # spline a - b
    roots = a.roots()

Marco, are you interpolating, s=0, or smoothing, s > 0 ?
If smoothing, knots wobble around with s and y (try the little test
below) -- good, adaptive, for interpolating 1 function, not so good for
your problem.

cheers
  -- denis

""" scipy UnivariateSpline sensitivity to s """
from __future__ import division
import sys
import numpy as np
from scipy.interpolate import UnivariateSpline  # $scipy/interpolate/fitpack2.py
import pylab as pl

np.set_printoptions( 2, threshold=10, edgeitems=3, suppress=True )
N = 100
H = 5
cycle = 4
plot = 0
exec "\n".join( sys.argv[1:] )  # N= ...
x = np.arange(N+1)
xup = np.arange( 0, N + 1e-6, 1/H )
if cycle == 0:
    y = np.zeros(N+1)
    y[N//2] = 1
else:
    y = np.sin( 2*np.pi * np.arange(N+1) / cycle )

#...............................................................................
title = "UnivariateSpline N=%d H=%d cycle=%.2g" % (N, H, cycle)
print title
for s in (0, .1, .5, 10, None ):
    uspline = UnivariateSpline( x, y, s=s )  # s=0 interpolates
    yup = uspline( xup )
    res = uspline.get_residual()  # == s == |y - yup[::H]|**2
    label = "s: %s  res: %.2g" % (s, res)
    print label
    knots = uspline.get_knots()
    print "%d knots: %s" % (len(knots), knots )
    roots = uspline.roots()
    print "%d roots: %s" % (len(roots), roots )
    print ""
    if plot:
        pl.plot( xup, yup, label=label )

if cycle == 0:
    pl.xlim( N//2 - 10, N//2 + 10 )
if plot:
    pl.title(title)
    pl.legend()
    pl.show()
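When the two datasets share the same x grid there is an even simpler
variant of the subtract-and-roots idea -- a sketch with stand-in data:
fit one spline to the *difference* of the two datasets and call roots()
on it directly, with no common-knot bookkeeping:

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 10, 50)
ya = np.sin(x)                 # stand-ins for Marco's two datasets
yb = 0.1 * x
diff = UnivariateSpline(x, ya - yb, s=0)
print diff.roots()             # x locations where the two curves cross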
From denis-bz-gg at t-online.de  Thu Jun 10 10:43:56 2010
From: denis-bz-gg at t-online.de (denis)
Date: Thu, 10 Jun 2010 07:43:56 -0700 (PDT)
Subject: Re: [SciPy-User] web applications
In-Reply-To: <4C1090A0.63BA.009B.0@twdb.state.tx.us>
References: <4C0FA48B.63BA.009B.0@twdb.state.tx.us> <4C0FEE92.1070406@noaa.gov>
 <4C1090A0.63BA.009B.0@twdb.state.tx.us>
Message-ID: 

Fwiw,
http://www.ohloh.net/p/compare?project_0=django&project_1=pylons&project_2=web2py&submit=Go
shows plots of lines of code for django, pylons and web2py. Of course
lines of code are only roughly correlated with doc, examples, learn time.

(A wibni: wouldn't it be nice if we had a common list of measurables for
sw packages -- nr. pages of examples, tutorial, ref, feature comparisons,
opinions? I'm not sure, though, if multi-person projects listen to
customers much.)

cheers
  -- denis

From charlesr.harris at gmail.com  Thu Jun 10 11:15:50 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 10 Jun 2010 09:15:50 -0600
Subject: Re: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
In-Reply-To: 
References: <201006101457.11013.rigal@rapideye.de>
 <201006101602.23128.rigal@rapideye.de>
Message-ID: 

On Thu, Jun 10, 2010 at 8:10 AM, Charles R Harris wrote:

> [... previous exchange quoted in full ...]
>
> I think you should open a ticket and include a simple example.

I also note that the documentation of leastsq is totally screwed up and
the covariance returned is not the covariance, nor is it the currently
documented Jacobian.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Thu Jun 10 11:35:59 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 10 Jun 2010 11:35:59 -0400
Subject: Re: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
In-Reply-To: 
References: <201006101457.11013.rigal@rapideye.de>
 <201006101602.23128.rigal@rapideye.de>
Message-ID: 

On Thu, Jun 10, 2010 at 11:15 AM, Charles R Harris wrote:
>
> [... previous exchange quoted in full ...]
>
> I also note that the documentation of leastsq is totally screwed up and
> the covariance returned is not the covariance, nor is it the currently
> documented Jacobian.

cov_x is the raw covariance; what's wrong with the explanation?

I never figured out how to get the Jacobian directly, and am not sure
about the details of the Jacobian calculation.

Josef

>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From charlesr.harris at gmail.com  Thu Jun 10 11:56:48 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 10 Jun 2010 09:56:48 -0600
Subject: Re: [SciPy-User] leastsq returns bizarre, not fitted, output for
 float values
In-Reply-To: 
References: <201006101457.11013.rigal@rapideye.de>
 <201006101602.23128.rigal@rapideye.de>
Message-ID: 

On Thu, Jun 10, 2010 at 9:35 AM, wrote:

> cov_x is the raw covariance; what's wrong with the explanation?
>
> I never figured out how to get the Jacobian directly, and am not sure
> about the details of the Jacobian calculation.

In this case the Jacobian is a numerical derivative, essentially the
Jacobian in the Gauss-Newton method, and available in its economical QR
with column pivoting factored form. What is returned as the covariance is
(J^T*J)^{-1} and needs to be multiplied by the variance of the error,
either estimated from the residuals or known a priori, in order to get an
estimate of the covariance.

Things missing from the documentation: signature of the function to be
optimized and "at a glance" documentation of what is returned and what
returns are optional.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
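In code, using Matthieu's LinearFit, p0, aX and aY from earlier in this
digest, the scaling Chuck describes would look roughly like this (a
sketch; it assumes the fit converged and cov_x is not None):

import numpy as np
from scipy.optimize import leastsq

# LinearFit, p0, aX, aY as defined in Matthieu's post above
p, cov_x, infodict, mesg, ier = leastsq(LinearFit, p0, args=(aY, aX),
                                        full_output=True)
resid = LinearFit(p, aY, aX)
s_sq = (resid**2).sum() / (len(resid) - len(p))  # residual variance
pcov = cov_x * s_sq               # estimated covariance of the parameters
perr = np.sqrt(np.diag(pcov))     # one-sigma parameter uncertainties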
Chuck

From wesmckinn at gmail.com  Thu Jun 10 13:26:49 2010
From: wesmckinn at gmail.com (Wes McKinney)
Date: Thu, 10 Jun 2010 13:26:49 -0400
Subject: [SciPy-User] Can I create a 3 argument UFunc easily?
In-Reply-To:
References:
Message-ID:

On Tue, Jun 8, 2010 at 7:27 PM, Robert Kern wrote:
> On Tue, Jun 8, 2010 at 18:46, John Salvatier wrote:
>> Hello,
>>
>> I would like to make a 3 argument UFunc that finds the weighted average
>> of two of the arguments, using the 3rd argument as the weight. This way,
>> the .accumulate method of the ufunc can be used as an exponentially
>> weighted moving average function.
>>
>> Unfortunately I am not very familiar with the Numpy C API, so I was
>> hoping to use the Cython hack for making UFuncs
>> (http://wiki.cython.org/MarkLodato/CreatingUfuncs). However, looking at
>> the UFunc C API doc
>> (http://docs.scipy.org/doc/numpy/reference/c-api.ufunc.html), it looks
>> like numpy only has 2 argument "generic functions". Is there a simple
>> way to create a "generic function" that takes 3 arguments and will still
>> work with accumulate? Is there another way to create the sort of UFunc I
>> want?
>
> While you can make n-argument ufuncs (scipy.special has many of them),
> .accumulate() only works for 2-argument ufuncs.
>
> All in all, it's a lot easier and more performant to simply code up an
> EWMA in C rather than "tricking" the general ufunc machinery into
> achieving a specific effect.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>  -- Umberto Eco

I can point out a couple of EWMA implementations present in
scikits.timeseries (C) and pandas (Cython) that you could co-opt if they
do what you want. I'm sure there are others out there:

http://svn.scipy.org/svn/scikits/trunk/timeseries/scikits/timeseries/src/c_tseries.c
http://code.google.com/p/pandas/source/browse/trunk/pandas/lib/src/moments.pyx#161
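If you just need the recursion itself and speed isn't critical, here is an
untested pure-NumPy sketch (alpha is an assumed smoothing weight in
(0, 1]):

import numpy as np

def ewma(x, alpha):
    # y[0] = x[0]; y[t] = alpha * x[t] + (1 - alpha) * y[t-1]
    y = np.empty(len(x))
    y[0] = x[0]
    for t in xrange(1, len(x)):
        y[t] = alpha * x[t] + (1.0 - alpha) * y[t - 1]
    return y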
From mdekauwe at gmail.com  Thu Jun 10 14:08:41 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 10 Jun 2010 11:08:41 -0700 (PDT)
Subject: [SciPy-User] re[SciPy-user] moving for loops...
In-Reply-To:
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com>
 <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com>
 <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com>
 <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com>
 <28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com>
Message-ID: <28846602.post@talk.nabble.com>

Hi,

No, if I am honest, I am a little confused about how what you are
suggesting would work. As I see it, the array I am trying to average from
has dims jules[(numyears * nummonths),1,numpts,0], where the first
dimension (132) is 12 months x 11 years. And as I said before, I would
like to average the Jan from the first, second, third years etc., then the
same for the Feb and so on.

So I don't see how you get to the 2D array that you mention in the first
line? I thought what you were suggesting was that I could precompute the
step that builds the index for the months, e.g.

mth_index = np.zeros(0)
for month in xrange(nummonths):
    mth_index = np.append(mth_index,
                          np.arange(month, numyears * nummonths, nummonths))

and use this as my index to skip the for loop. Though I still have a for
loop, I guess!

Benjamin Root-2 wrote:
>
> Correction for me as well. To mask out the negative values, use masked
> arrays. So we will turn jules_2d into a masked array (second line), then
> all subsequent commands will still work as expected. It is very similar
> to replacing negative values with nans and using nanmin().
>
>> jules_2d = jules.reshape((-1, 12))
>> jules_2d = np.ma.masked_array(jules_2d, mask=jules_2d < 0.0)
>> jules_monthly = np.mean(jules_2d, axis=0)
>> print jules_monthly.shape
> (12,)
>
> Ben Root
>
> On Tue, Jun 8, 2010 at 7:49 PM, Benjamin Root wrote:
>
>> The np.mod in my example caused the data points to stay within [0, 11]
>> in order to illustrate that these are months. In my example, months are
>> columns, years are rows. In your desired output, months are rows and
>> years are columns. It makes very little difference which way you have it.
>>
>> Anyway, let's imagine that we have a time series of data "jules". We can
>> easily reshape this like so:
>>
>> > jules_2d = jules.reshape((-1, 12))
>> > jules_monthly = np.mean(jules_2d, axis=0)
>> > print jules_monthly.shape
>> (12,)
>>
>> voila! You have 12 values in jules_monthly which are means for that
>> month across all years.
>>
>> protip - if you want yearly averages, just change the axis parameter in
>> np.mean():
>> > jules_yearly = np.mean(jules_2d, axis=1)
>>
>> I hope that makes my previous explanation clearer.
>>
>> Ben Root
>>
>> On Tue, Jun 8, 2010 at 5:41 PM, mdekauwe wrote:
>>
>>> OK...
>>>
>>> but if I do...
>>>
>>> In [28]: np.mod(np.arange(nummonths*numyears), nummonths).reshape((-1,
>>> nummonths))
>>> Out[28]:
>>> array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>>        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
>>>
>>> When really I would be after something like this I think?
>>>
>>> array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120],
>>>       [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121],
>>>       [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122]
>>> etc, etc
>>>
>>> i.e. so for each month jump across the years.
>>>
>>> Not quite sure of this example... this is what I currently have, which
>>> does seem to work, though I guess not completely efficiently.
>>>
>>> for month in xrange(nummonths):
>>>     tmp = jules[xrange(0, numyears * nummonths, nummonths),VAR,:,0]
>>>     tmp[tmp < 0.0] = np.nan
>>>     data[month,:] = np.mean(tmp, axis=0)
>>>
>>> Benjamin Root-2 wrote:
>>> >
>>> > If you want an average for each month from your timeseries, then the
>>> > sneaky way would be to reshape your array so that the time dimension
>>> > is split into two (month, year) dimensions.
>>> >
>>> > For a 1-D array, this would be:
>>> >
>>> >> dataarray = numpy.mod(numpy.arange(36), 12)
>>> >> print dataarray
>>> > array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11,  0,  1,  2,
>>> >         3,  4,  5,  6,  7,  8,  9, 10, 11,  0,  1,  2,  3,  4,  5,
>>> >         6,  7,  8,  9, 10, 11])
>>> >> datamatrix = dataarray.reshape((-1, 12))
>>> >> print datamatrix
>>> > array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>> >        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
>>> >        [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]])
>>> >
>>> > Hope that helps.
>>> >
>>> > Ben Root
>>> >
>>> > On Fri, May 28, 2010 at 3:28 PM, mdekauwe wrote:
>>> >
>>> >> OK so I just need to have a quick loop across the 12 months then,
>>> >> that is fine, just thought there might have been a sneaky way!
>>> >>
>>> >> Really appreciated, getting there slowly!
>>> >> >>> >> >>> >> >>> >> josef.pktd wrote: >>> >> > >>> >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe >>> wrote: >>> >> >> >>> >> >> ok - something like this then...but how would i get the index for >>> the >>> >> >> month >>> >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? >>> >> >> >>> >> >> data[month,:] = array[xrange(0, numyears * nummonths, >>> >> nummonths),VAR,:,0] >>> >> > >>> >> > you would still need to start at the right month >>> >> > data[month,:] = array[xrange(month, numyears * nummonths, >>> >> > nummonths),VAR,:,0] >>> >> > or >>> >> > data[month,:] = array[month: numyears * nummonths : >>> nummonths),VAR,:,0] >>> >> > >>> >> > an alternative would be a reshape with an extra month dimension and >>> >> > then sum only once over the year axis. this might be faster but >>> >> > trickier to get the correct reshape . >>> >> > >>> >> > Josef >>> >> > >>> >> >> >>> >> >> and would that be quicker than making an array months... >>> >> >> >>> >> >> months = np.arange(numyears * nummonths) >>> >> >> >>> >> >> and you that instead like you suggested x[start:end:12,:]? >>> >> >> >>> >> >> Many thanks again... >>> >> >> >>> >> >> >>> >> >> josef.pktd wrote: >>> >> >>> >>> >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe >>> wrote: >>> >> >>>> >>> >> >>>> Ok thanks...I'll take a look. >>> >> >>>> >>> >> >>>> Back to my loops issue. What if instead this time I wanted to >>> take >>> >> an >>> >> >>>> average so every march in 11 years, is there a quicker way to go >>> >> about >>> >> >>>> doing >>> >> >>>> that than my current method? >>> >> >>>> >>> >> >>>> nummonths = 12 >>> >> >>>> numyears = 11 >>> >> >>>> >>> >> >>>> for month in xrange(nummonths): >>> >> >>>> for i in xrange(numpts): >>> >> >>>> for ym in xrange(month, numyears * nummonths, nummonths): >>> >> >>>> data[month, i] += array[ym, VAR, land_pts_index[i], >>> 0] >>> >> >>> >>> >> >>> >>> >> >>> x[start:end:12,:] gives you every 12th row of an array x >>> >> >>> >>> >> >>> something like this should work to get rid of the inner loop, or >>> you >>> >> >>> could directly put >>> >> >>> range(month, numyears * nummonths, nummonths) into the array >>> instead >>> >> >>> of ym and sum() >>> >> >>> >>> >> >>> Josef >>> >> >>> >>> >> >>> >>> >> >>>> >>> >> >>>> so for each point in the array for a given month i am jumping >>> >> through >>> >> >>>> and >>> >> >>>> getting the next years month and so on, summing it. >>> >> >>>> >>> >> >>>> Thanks... >>> >> >>>> >>> >> >>>> >>> >> >>>> josef.pktd wrote: >>> >> >>>>> >>> >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe >>> >> wrote: >>> >> >>>>>> >>> >> >>>>>> Could you possibly if you have time explain further your >>> comment >>> >> re >>> >> >>>>>> the >>> >> >>>>>> p-values, your suggesting I am misusing them? >>> >> >>>>> >>> >> >>>>> Depends on your use and interpretation >>> >> >>>>> >>> >> >>>>> test statistics, p-values are random variables, if you look at >>> >> several >>> >> >>>>> tests at the same time, some p-values will be large just by >>> chance. 
>>> >> >>>>> If, for example you just look at the largest test statistic, >>> then >>> >> the >>> >> >>>>> distribution for the max of several test statistics is not the >>> same >>> >> as >>> >> >>>>> the distribution for a single test statistic >>> >> >>>>> >>> >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons >>> >> >>>>> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm >>> >> >>>>> >>> >> >>>>> we also just had a related discussion for ANOVA post-hoc tests >>> on >>> >> the >>> >> >>>>> pystatsmodels group. >>> >> >>>>> >>> >> >>>>> Josef >>> >> >>>>>> >>> >> >>>>>> Thanks. >>> >> >>>>>> >>> >> >>>>>> >>> >> >>>>>> josef.pktd wrote: >>> >> >>>>>>> >>> >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe >>> >>> >> >>>>>>> wrote: >>> >> >>>>>>>> >>> >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the >>> >> comparison >>> >> >>>>>>>> for >>> >> >>>>>>>> each >>> >> >>>>>>>> pixel of the world and then I have a basemap function call >>> which >>> >> I >>> >> >>>>>>>> guess >>> >> >>>>>>>> slows it down further...hmm >>> >> >>>>>>> >>> >> >>>>>>> I don't see much that could be done differently, after a >>> brief >>> >> look. >>> >> >>>>>>> >>> >> >>>>>>> stats.pearsonr could be replaced by an array version using >>> >> directly >>> >> >>>>>>> the formula for correlation even with nans. wilcoxon looks >>> slow, >>> >> and >>> >> >>>>>>> I >>> >> >>>>>>> never tried or seen a faster version. >>> >> >>>>>>> >>> >> >>>>>>> just a reminder, the p-values are for a single test, when you >>> >> have >>> >> >>>>>>> many of them, then they don't have the right size/confidence >>> >> level >>> >> >>>>>>> for >>> >> >>>>>>> an overall or joint test. (some packages report a Bonferroni >>> >> >>>>>>> correction in this case) >>> >> >>>>>>> >>> >> >>>>>>> Josef >>> >> >>>>>>> >>> >> >>>>>>> >>> >> >>>>>>>> >>> >> >>>>>>>> i.e. 
>>> >> >>>>>>>> >>> >> >>>>>>>> def compareSnowData(jules_var): >>> >> >>>>>>>> # Extract the 11 years of snow data and return >>> >> >>>>>>>> outrows = 180 >>> >> >>>>>>>> outcols = 360 >>> >> >>>>>>>> numyears = 11 >>> >> >>>>>>>> nummonths = 12 >>> >> >>>>>>>> >>> >> >>>>>>>> # Read various files >>> >> >>>>>>>> fname="world_valid_jules_pts.ascii" >>> >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, cols) >>> = >>> >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) >>> >> >>>>>>>> >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" >>> >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, numrows=15238, >>> >> >>>>>>>> numcols=1, >>> >> >>>>>>>> \ >>> >> >>>>>>>> timesteps=132, numvars=26) >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" >>> >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, numrows=15238, >>> >> >>>>>>>> numcols=1, >>> >> >>>>>>>> \ >>> >> >>>>>>>> timesteps=132, numvars=26) >>> >> >>>>>>>> >>> >> >>>>>>>> # grab some space >>> >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), >>> >> >>>>>>>> dtype=np.float32) >>> >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), >>> >> >>>>>>>> dtype=np.float32) >>> >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), >>> >> dtype=np.float32) >>> >> * >>> >> >>>>>>>> np.nan >>> >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >>> >> dtype=np.float32) >>> >> >>>>>>>> * >>> >> >>>>>>>> np.nan >>> >> >>>>>>>> >>> >> >>>>>>>> # extract the data >>> >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] >>> >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] >>> >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, >>> data1_snow) >>> >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, >>> data2_snow) >>> >> >>>>>>>> #for month in xrange(numyears * nummonths): >>> >> >>>>>>>> # for i in xrange(numpts): >>> >> >>>>>>>> # data1 = >>> >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] >>> >> >>>>>>>> # data2 = >>> >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] >>> >> >>>>>>>> # if data1 >= 0.0: >>> >> >>>>>>>> # data1_snow[month,i] = data1 >>> >> >>>>>>>> # else: >>> >> >>>>>>>> # data1_snow[month,i] = np.nan >>> >> >>>>>>>> # if data2 > 0.0: >>> >> >>>>>>>> # data2_snow[month,i] = data2 >>> >> >>>>>>>> # else: >>> >> >>>>>>>> # data2_snow[month,i] = np.nan >>> >> >>>>>>>> >>> >> >>>>>>>> # exclude any months from *both* arrays where we have >>> dodgy >>> >> >>>>>>>> data, >>> >> >>>>>>>> else >>> >> >>>>>>>> we >>> >> >>>>>>>> # can't do the correlations correctly!! >>> >> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, >>> >> data1_snow) >>> >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, >>> >> data1_snow) >>> >> >>>>>>>> >>> >> >>>>>>>> # put data on a regular grid... >>> >> >>>>>>>> print 'regridding landpts...' >>> >> >>>>>>>> for i in xrange(numpts): >>> >> >>>>>>>> # exclude the NaN, note masking them doesn't work in >>> the >>> >> >>>>>>>> stats >>> >> >>>>>>>> func >>> >> >>>>>>>> x = data1_snow[:,i] >>> >> >>>>>>>> x = x[np.isfinite(x)] >>> >> >>>>>>>> y = data2_snow[:,i] >>> >> >>>>>>>> y = y[np.isfinite(y)] >>> >> >>>>>>>> >>> >> >>>>>>>> # r^2 >>> >> >>>>>>>> # exclude v.small arrays, i.e. 
we need just less over >>> 4 >>> >> >>>>>>>> years >>> >> >>>>>>>> of >>> >> >>>>>>>> data >>> >> >>>>>>>> if len(x) and len(y) > 50: >>> >> >>>>>>>> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] = >>> >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 >>> >> >>>>>>>> >>> >> >>>>>>>> # wilcox signed rank test >>> >> >>>>>>>> # make sure we have enough samples to do the test >>> >> >>>>>>>> d = x - y >>> >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # Keep >>> all >>> >> >>>>>>>> non-zero >>> >> >>>>>>>> differences >>> >> >>>>>>>> count = len(d) >>> >> >>>>>>>> if count > 10: >>> >> >>>>>>>> z, pval = stats.wilcoxon(x, y) >>> >> >>>>>>>> # only map out sign different data >>> >> >>>>>>>> if pval < 0.05: >>> >> >>>>>>>> >>> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >>> >> = >>> >> >>>>>>>> np.mean(x - y) >>> >> >>>>>>>> >>> >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) >>> >> >>>>>>>> >>> >> >>>>>>>> >>> >> >>>>>>>> josef.pktd wrote: >>> >> >>>>>>>>> >>> >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe < >>> mdekauwe at gmail.com> >>> >> >>>>>>>>> wrote: >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> Also I then need to remap the 2D array I make onto another >>> >> grid >>> >> >>>>>>>>>> (the >>> >> >>>>>>>>>> world in >>> >> >>>>>>>>>> this case). Which again I had am doing with a loop (note >>> >> numpts >>> >> >>>>>>>>>> is >>> >> >>>>>>>>>> a >>> >> >>>>>>>>>> lot >>> >> >>>>>>>>>> bigger than my example above). >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >>> >> dtype=np.float32) >>> >> >>>>>>>>>> * >>> >> >>>>>>>>>> np.nan >>> >> >>>>>>>>>> for i in xrange(numpts): >>> >> >>>>>>>>>> # exclude the NaN, note masking them doesn't work >>> in >>> >> the >>> >> >>>>>>>>>> stats >>> >> >>>>>>>>>> func >>> >> >>>>>>>>>> x = data1_snow[:,i] >>> >> >>>>>>>>>> x = x[np.isfinite(x)] >>> >> >>>>>>>>>> y = data2_snow[:,i] >>> >> >>>>>>>>>> y = y[np.isfinite(y)] >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> # wilcox signed rank test >>> >> >>>>>>>>>> # make sure we have enough samples to do the test >>> >> >>>>>>>>>> d = x - y >>> >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # >>> Keep >>> >> all >>> >> >>>>>>>>>> non-zero >>> >> >>>>>>>>>> differences >>> >> >>>>>>>>>> count = len(d) >>> >> >>>>>>>>>> if count > 10: >>> >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) >>> >> >>>>>>>>>> # only map out sign different data >>> >> >>>>>>>>>> if pval < 0.05: >>> >> >>>>>>>>>> >>> >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >>> >> >>>>>>>>>> = >>> >> >>>>>>>>>> np.mean(x - y) >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> Now I think I can push the data in one move into the >>> >> >>>>>>>>>> wilcoxStats_snow >>> >> >>>>>>>>>> array >>> >> >>>>>>>>>> by removing the index, >>> >> >>>>>>>>>> but I can't see how I will get the individual x and y pts >>> for >>> >> >>>>>>>>>> each >>> >> >>>>>>>>>> array >>> >> >>>>>>>>>> member correctly without the loop, this was my attempt >>> which >>> >> of >>> >> >>>>>>>>>> course >>> >> >>>>>>>>>> doesn't work! >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> x = data1_snow[:,:] >>> >> >>>>>>>>>> x = x[np.isfinite(x)] >>> >> >>>>>>>>>> y = data2_snow[:,:] >>> >> >>>>>>>>>> y = y[np.isfinite(y)] >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> # r^2 >>> >> >>>>>>>>>> # exclude v.small arrays, i.e. 
we need just less over 4 >>> years >>> >> of >>> >> >>>>>>>>>> data >>> >> >>>>>>>>>> if len(x) and len(y) > 50: >>> >> >>>>>>>>>> pearsonsr_snow[((180-1)-(rows-1)),cols-1] = >>> >> (stats.pearsonr(x, >>> >> >>>>>>>>>> y)[0])**2 >>> >> >>>>>>>>> >>> >> >>>>>>>>> >>> >> >>>>>>>>> If you want to do pairwise comparisons with stats.wilcoxon, >>> >> then >>> >> >>>>>>>>> you >>> >> >>>>>>>>> might be stuck with the loop, since wilcoxon takes only two >>> 1d >>> >> >>>>>>>>> arrays >>> >> >>>>>>>>> at a time (if I read the help correctly). >>> >> >>>>>>>>> >>> >> >>>>>>>>> Also the presence of nans might force the use a loop. >>> >> stats.mstats >>> >> >>>>>>>>> has >>> >> >>>>>>>>> masked array versions, but I didn't see wilcoxon in the >>> list. >>> >> >>>>>>>>> (Even >>> >> >>>>>>>>> when vectorized operations would work with regular arrays, >>> nan >>> >> or >>> >> >>>>>>>>> masked array versions still have to loop in many cases.) >>> >> >>>>>>>>> >>> >> >>>>>>>>> If you have many columns with count <= 10, so that wilcoxon >>> is >>> >> not >>> >> >>>>>>>>> calculated then it might be worth to use only array >>> operations >>> >> up >>> >> >>>>>>>>> to >>> >> >>>>>>>>> that point. If wilcoxon is calculated most of the time, >>> then >>> >> it's >>> >> >>>>>>>>> not >>> >> >>>>>>>>> worth thinking too hard about this. >>> >> >>>>>>>>> >>> >> >>>>>>>>> Josef >>> >> >>>>>>>>> >>> >> >>>>>>>>> >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> thanks. >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> >>> >> >>>>>>>>>> mdekauwe wrote: >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both >>> methods >>> >> >>>>>>>>>>> work. >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> I don't quite get what you mean about slicing with axis > >>> 3. >>> >> Is >>> >> >>>>>>>>>>> there >>> >> >>>>>>>>>>> a >>> >> >>>>>>>>>>> link you can recommend I should read? Does that mean >>> given >>> I >>> >> >>>>>>>>>>> have >>> >> >>>>>>>>>>> 4dims >>> >> >>>>>>>>>>> that Josef's suggestion would be more advised in this >>> case? >>> >> >>>>>>>>> >>> >> >>>>>>>>> There were several discussions on the mailing lists (fancy >>> >> slicing >>> >> >>>>>>>>> and >>> >> >>>>>>>>> indexing). Your case is safe, but if you run in future into >>> >> funny >>> >> >>>>>>>>> shapes, you can look up the details. >>> >> >>>>>>>>> when in doubt, I use np.arange(...) >>> >> >>>>>>>>> >>> >> >>>>>>>>> Josef >>> >> >>>>>>>>> >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> Thanks. >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> >>> >> >>>>>>>>>>> josef.pktd wrote: >>> >> >>>>>>>>>>>> >>> >> >>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe < >>> >> mdekauwe at gmail.com> >>> >> >>>>>>>>>>>> wrote: >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> Thanks that works... >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> So the way to do it is with np.arange(tsteps)[:,None], >>> that >>> >> >>>>>>>>>>>>> was >>> >> >>>>>>>>>>>>> the >>> >> >>>>>>>>>>>>> step >>> >> >>>>>>>>>>>>> I >>> >> >>>>>>>>>>>>> was struggling with, so this forms a 2D array which >>> >> replaces >>> >> >>>>>>>>>>>>> the >>> >> >>>>>>>>>>>>> the >>> >> >>>>>>>>>>>>> two >>> >> >>>>>>>>>>>>> for >>> >> >>>>>>>>>>>>> loops? Do I have that right? >>> >> >>>>>>>>>>>> >>> >> >>>>>>>>>>>> Yes, but as Zachary showed, if you need the full index >>> in >>> a >>> >> >>>>>>>>>>>> dimension, >>> >> >>>>>>>>>>>> then you can use slicing. It might be faster. 
>>> >> >>>>>>>>>>>> And a warning, mixing slices and index arrays with 3 or >>> more >>> >> >>>>>>>>>>>> dimensions can have some surprise switching of axes. >>> >> >>>>>>>>>>>> >>> >> >>>>>>>>>>>> Josef >>> >> >>>>>>>>>>>> >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> A lot quicker...! >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> Martin >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> >>> >> >>>>>>>>>>>>> josef.pktd wrote: >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> wrote: >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> Hi, >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> I am trying to extract data from a 4D array and store >>> it >>> >> in >>> >> >>>>>>>>>>>>>>> a >>> >> >>>>>>>>>>>>>>> 2D >>> >> >>>>>>>>>>>>>>> array, >>> >> >>>>>>>>>>>>>>> but >>> >> >>>>>>>>>>>>>>> avoid my current usage of the for loops for speed, as >>> in >>> >> >>>>>>>>>>>>>>> reality >>> >> >>>>>>>>>>>>>>> the >>> >> >>>>>>>>>>>>>>> arrays >>> >> >>>>>>>>>>>>>>> sizes are quite big. Could someone also try and >>> explain >>> >> the >>> >> >>>>>>>>>>>>>>> solution >>> >> >>>>>>>>>>>>>>> as >>> >> >>>>>>>>>>>>>>> well >>> >> >>>>>>>>>>>>>>> if they have a spare moment as I am still finding it >>> >> quite >>> >> >>>>>>>>>>>>>>> difficult >>> >> >>>>>>>>>>>>>>> to >>> >> >>>>>>>>>>>>>>> get >>> >> >>>>>>>>>>>>>>> over the habit of using loops (C convert for my >>> sins). >>> I >>> >> get >>> >> >>>>>>>>>>>>>>> that >>> >> >>>>>>>>>>>>>>> one >>> >> >>>>>>>>>>>>>>> could >>> >> >>>>>>>>>>>>>>> precompute the indices's i and j i.e. >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> i = np.arange(tsteps) >>> >> >>>>>>>>>>>>>>> j = np.arange(numpts) >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> but just can't get my head round how i then use >>> them... >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> Thanks, >>> >> >>>>>>>>>>>>>>> Martin >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> import numpy as np >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> numpts=10 >>> >> >>>>>>>>>>>>>>> tsteps = 12 >>> >> >>>>>>>>>>>>>>> vari = 22 >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, 1)) >>> >> >>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), >>> dtype=np.float32) >>> >> >>>>>>>>>>>>>>> index = np.arange(numpts) >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> for i in xrange(tsteps): >>> >> >>>>>>>>>>>>>>> for j in xrange(numpts): >>> >> >>>>>>>>>>>>>>> new_data[i,j] = data[i,5,index[j],0] >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> The index arrays need to be broadcastable against each >>> >> other. >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> I think this should do it >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, >>> >> >>>>>>>>>>>>>> np.arange(numpts), >>> >> >>>>>>>>>>>>>> 0] >>> >> >>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>> Josef >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> >>> >> >>>>>>>>>>>>>>> -- >>> >> >>>>>>>>>>>>>>> View this message in context: >>> >> >>>>>>>>>>>>>>> >>> >> http://old.nabble.com/removing-for-loops...-tp28633477p28633477.html >>> >> >>>>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at >>> >> Nabble.com. 
>>> [... rest of the quoted thread, Nabble links and mailing-list footers
>>> trimmed ...]
From josef.pktd at gmail.com  Thu Jun 10 14:27:13 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 10 Jun 2010 14:27:13 -0400
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves
In-Reply-To:
References:
Message-ID:

On Thu, Jun 10, 2010 at 4:05 AM, Sebastian Haase wrote:
> Hi,
>
> so far I have been using scipy.optimize.leastsq to satisfy all my
> curve fitting needs. But now I am thinking about "global fitting" --
> i.e. fitting multiple datasets with shared parameters
> (e.g. ref here:
> http://www.originlab.com/index.aspx?go=Products/Origin/DataAnalysis/CurveFitting/GlobalFitting)
>
> I have looked here (http://www.scipy.org/Cookbook/FittingData) and here
> (http://docs.scipy.org/doc/scipy/reference/optimize.html)
>
> Can someone provide an example? Which of the routines of
> scipy.optimize are "easiest" to use?
>
> Finally, I'm thinking about a "much more" complicated fitting task:
> fitting two sets of datasets with two types of functions.
> In total I have 10 datasets to be fit with a function f1, and 10 more
> to be fit with function f2. Each function depends on 6 parameters
> A1, A2, A3, r1, r2, r3.
> A1, A2, A3 should be identical ("shared") between all 20 sets, while
> r1, r2, r3 should be shared between the i-th set of type f1 and the
> i-th set of f2.
> Last but not least, it would be nice if one could specify constraints
> such that r1, r2, r3 > 0 and A1+A2+A3 == 1 and 0 <= Ai <= 1.
>
> ;-) Is this too much?
>
> Thanks for any help or hints,
> Sebastian Haase

Assuming your noise or error terms are uncorrelated, I would still use
optimize.leastsq or optimize.curve_fit, with a function that stacks all
the errors in one 1-d array. If there are differences in the noise
variance, then weights/sigma per function, as in curve_fit, can be used.

Common parameter restrictions across functions can be encoded by using
the same parameter in several (sub-)functions.

In this case, I would impose the constraints through reparameterization,
e.g.

r1 = exp(r1_), ...
A1 = exp(A1_)/(exp(A1_) + exp(A2_) + 1)
A2 = exp(A2_)/(exp(A1_) + exp(A2_) + 1)
A3 = 1/(exp(A1_) + exp(A2_) + 1)

which keeps r1, r2, r3 positive and makes A1, A2, A3 lie in (0, 1) and sum
to one. (Maybe it's more tricky to get the standard deviation of the
original parameter estimates.)

Or, as an alternative, calculate the total weighted sum of squared errors
and use one of the constrained fmin solvers in optimize.
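A rough, untested sketch of the stacking idea, for just one pair of
datasets that share a single amplitude A -- the model functions f1, f2 and
all the numbers are made up purely for illustration:

import numpy as np
from scipy.optimize import leastsq

def f1(x, A, r):
    # toy model for the first dataset
    return A * np.exp(-r * x)

def f2(x, A, r):
    # toy model for the second dataset
    return A / (1.0 + r * x)

def residuals(params, x1, y1, x2, y2):
    A, r1, r2 = params
    # stack the errors of both datasets into one 1-d array;
    # A is shared, r1 and r2 are per-dataset
    return np.concatenate((y1 - f1(x1, A, r1),
                           y2 - f2(x2, A, r2)))

x1 = x2 = np.linspace(0.0, 5.0, 40)
y1 = f1(x1, 2.0, 0.7) + 0.02 * np.random.randn(40)
y2 = f2(x2, 2.0, 1.5) + 0.02 * np.random.randn(40)

# positivity of r1, r2 could be imposed by estimating log(r) instead,
# as in the reparameterization above
p, ier = leastsq(residuals, [1.0, 1.0, 1.0], args=(x1, y1, x2, y2))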
Josef

From ben.root at ou.edu  Thu Jun 10 14:56:59 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Thu, 10 Jun 2010 13:56:59 -0500
Subject: [SciPy-User] re[SciPy-user] moving for loops...
In-Reply-To: <28846602.post@talk.nabble.com>
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com>
 <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com>
 <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com>
 <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com>
 <28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com>
 <28846602.post@talk.nabble.com>
Message-ID:

Well, let's try a more direct example. I am going to create a 4d array of
random values to illustrate. I know the lengths of the dimensions won't be
exactly the same as yours, but the example will still be valid. In this
example, I will be able to calculate *all* of the monthly averages for
*all* of the variables for *all* of the grid points without a single loop.

> jules = np.random.random((132, 10, 50, 3))
> print jules.shape
(132, 10, 50, 3)
> jules_5d = np.reshape(jules, (-1, 12) + jules.shape[1:])
> print jules_5d.shape
(11, 12, 10, 50, 3)
> jules_5d = np.ma.masked_array(jules_5d, mask=jules_5d < 0.0)
> jules_means = np.mean(jules_5d, axis=0)
> print jules_means.shape
(12, 10, 50, 3)

voila! This matrix has a mean for each month across all eleven years for
each datapoint in each of the 10 variables at each (I am assuming) level
in the atmosphere.

So, if you want to operate on a subset of your jules matrix (for example,
you need to do special masking for each variable), then you can just work
off of a slice of the original matrix, and many of these same concepts in
this example and the previous example still apply.
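And if you then want, say, just variable 5 at level 0 (indices picked
arbitrarily here for illustration), a slice of the result is all you need:

> data = jules_means[:, 5, :, 0]
> print data.shape
(12, 50)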
> >>> > >>> array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120], > >>> [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121], > >>> [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122] > >>> etc, etc > >>> > >>> i.e. so for each month jump across the years. > >>> > >>> Not quite sure of this example...this is what I currently have which > >>> does > >>> seem to work, though I guess not completely efficiently. > >>> > >>> for month in xrange(nummonths): > >>> tmp = jules[xrange(0, numyears * nummonths, nummonths),VAR,:,0] > >>> tmp[tmp < 0.0] = np.nan > >>> data[month,:] = np.mean(tmp, axis=0) > >>> > >>> > >>> > >>> > >>> Benjamin Root-2 wrote: > >>> > > >>> > If you want an average for each month from your timeseries, then the > >>> > sneaky > >>> > way would be to reshape your array so that the time dimension is > split > >>> > into > >>> > two (month, year) dimensions. > >>> > > >>> > For a 1-D array, this would be: > >>> > > >>> >> dataarray = numpy.mod(numpy.arange(36), 12) > >>> >> print dataarray > >>> > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, > 3, > >>> 4, > >>> > 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, > 8, > >>> 9, > >>> > 10, 11]) > >>> >> datamatrix = dataarray.reshape((-1, 12)) > >>> >> print datamatrix > >>> > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > >>> > > >>> > Hope that helps. > >>> > > >>> > Ben Root > >>> > > >>> > > >>> > On Fri, May 28, 2010 at 3:28 PM, mdekauwe > wrote: > >>> > > >>> >> > >>> >> OK so I just need to have a quick loop across the 12 months then, > >>> that > >>> is > >>> >> fine, just thought there might have been a sneaky way! > >>> >> > >>> >> Really appreciated, getting there slowly! > >>> >> > >>> >> > >>> >> > >>> >> josef.pktd wrote: > >>> >> > > >>> >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe > >>> wrote: > >>> >> >> > >>> >> >> ok - something like this then...but how would i get the index for > >>> the > >>> >> >> month > >>> >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? > >>> >> >> > >>> >> >> data[month,:] = array[xrange(0, numyears * nummonths, > >>> >> nummonths),VAR,:,0] > >>> >> > > >>> >> > you would still need to start at the right month > >>> >> > data[month,:] = array[xrange(month, numyears * nummonths, > >>> >> > nummonths),VAR,:,0] > >>> >> > or > >>> >> > data[month,:] = array[month: numyears * nummonths : > >>> nummonths),VAR,:,0] > >>> >> > > >>> >> > an alternative would be a reshape with an extra month dimension > and > >>> >> > then sum only once over the year axis. this might be faster but > >>> >> > trickier to get the correct reshape . > >>> >> > > >>> >> > Josef > >>> >> > > >>> >> >> > >>> >> >> and would that be quicker than making an array months... > >>> >> >> > >>> >> >> months = np.arange(numyears * nummonths) > >>> >> >> > >>> >> >> and you that instead like you suggested x[start:end:12,:]? > >>> >> >> > >>> >> >> Many thanks again... > >>> >> >> > >>> >> >> > >>> >> >> josef.pktd wrote: > >>> >> >>> > >>> >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe > >>> wrote: > >>> >> >>>> > >>> >> >>>> Ok thanks...I'll take a look. > >>> >> >>>> > >>> >> >>>> Back to my loops issue. What if instead this time I wanted to > >>> take > >>> >> an > >>> >> >>>> average so every march in 11 years, is there a quicker way to > go > >>> >> about > >>> >> >>>> doing > >>> >> >>>> that than my current method? 
> >>> >> >>>> > >>> >> >>>> nummonths = 12 > >>> >> >>>> numyears = 11 > >>> >> >>>> > >>> >> >>>> for month in xrange(nummonths): > >>> >> >>>> for i in xrange(numpts): > >>> >> >>>> for ym in xrange(month, numyears * nummonths, > nummonths): > >>> >> >>>> data[month, i] += array[ym, VAR, land_pts_index[i], > >>> 0] > >>> >> >>> > >>> >> >>> > >>> >> >>> x[start:end:12,:] gives you every 12th row of an array x > >>> >> >>> > >>> >> >>> something like this should work to get rid of the inner loop, or > >>> you > >>> >> >>> could directly put > >>> >> >>> range(month, numyears * nummonths, nummonths) into the array > >>> instead > >>> >> >>> of ym and sum() > >>> >> >>> > >>> >> >>> Josef > >>> >> >>> > >>> >> >>> > >>> >> >>>> > >>> >> >>>> so for each point in the array for a given month i am jumping > >>> >> through > >>> >> >>>> and > >>> >> >>>> getting the next years month and so on, summing it. > >>> >> >>>> > >>> >> >>>> Thanks... > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> josef.pktd wrote: > >>> >> >>>>> > >>> >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe > > >>> >> wrote: > >>> >> >>>>>> > >>> >> >>>>>> Could you possibly if you have time explain further your > >>> comment > >>> >> re > >>> >> >>>>>> the > >>> >> >>>>>> p-values, your suggesting I am misusing them? > >>> >> >>>>> > >>> >> >>>>> Depends on your use and interpretation > >>> >> >>>>> > >>> >> >>>>> test statistics, p-values are random variables, if you look at > >>> >> several > >>> >> >>>>> tests at the same time, some p-values will be large just by > >>> chance. > >>> >> >>>>> If, for example you just look at the largest test statistic, > >>> then > >>> >> the > >>> >> >>>>> distribution for the max of several test statistics is not the > >>> same > >>> >> as > >>> >> >>>>> the distribution for a single test statistic > >>> >> >>>>> > >>> >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons > >>> >> >>>>> > http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm > >>> >> >>>>> > >>> >> >>>>> we also just had a related discussion for ANOVA post-hoc tests > >>> on > >>> >> the > >>> >> >>>>> pystatsmodels group. > >>> >> >>>>> > >>> >> >>>>> Josef > >>> >> >>>>>> > >>> >> >>>>>> Thanks. > >>> >> >>>>>> > >>> >> >>>>>> > >>> >> >>>>>> josef.pktd wrote: > >>> >> >>>>>>> > >>> >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe > >>> > >>> >> >>>>>>> wrote: > >>> >> >>>>>>>> > >>> >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the > >>> >> comparison > >>> >> >>>>>>>> for > >>> >> >>>>>>>> each > >>> >> >>>>>>>> pixel of the world and then I have a basemap function call > >>> which > >>> >> I > >>> >> >>>>>>>> guess > >>> >> >>>>>>>> slows it down further...hmm > >>> >> >>>>>>> > >>> >> >>>>>>> I don't see much that could be done differently, after a > >>> brief > >>> >> look. > >>> >> >>>>>>> > >>> >> >>>>>>> stats.pearsonr could be replaced by an array version using > >>> >> directly > >>> >> >>>>>>> the formula for correlation even with nans. wilcoxon looks > >>> slow, > >>> >> and > >>> >> >>>>>>> I > >>> >> >>>>>>> never tried or seen a faster version. > >>> >> >>>>>>> > >>> >> >>>>>>> just a reminder, the p-values are for a single test, when > you > >>> >> have > >>> >> >>>>>>> many of them, then they don't have the right size/confidence > >>> >> level > >>> >> >>>>>>> for > >>> >> >>>>>>> an overall or joint test. 
(some packages report a Bonferroni > >>> >> >>>>>>> correction in this case) > >>> >> >>>>>>> > >>> >> >>>>>>> Josef > >>> >> >>>>>>> > >>> >> >>>>>>> > >>> >> >>>>>>>> > >>> >> >>>>>>>> i.e. > >>> >> >>>>>>>> > >>> >> >>>>>>>> def compareSnowData(jules_var): > >>> >> >>>>>>>> # Extract the 11 years of snow data and return > >>> >> >>>>>>>> outrows = 180 > >>> >> >>>>>>>> outcols = 360 > >>> >> >>>>>>>> numyears = 11 > >>> >> >>>>>>>> nummonths = 12 > >>> >> >>>>>>>> > >>> >> >>>>>>>> # Read various files > >>> >> >>>>>>>> fname="world_valid_jules_pts.ascii" > >>> >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, > cols) > >>> = > >>> >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) > >>> >> >>>>>>>> > >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" > >>> >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, > numrows=15238, > >>> >> >>>>>>>> numcols=1, > >>> >> >>>>>>>> \ > >>> >> >>>>>>>> timesteps=132, numvars=26) > >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" > >>> >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, > numrows=15238, > >>> >> >>>>>>>> numcols=1, > >>> >> >>>>>>>> \ > >>> >> >>>>>>>> timesteps=132, numvars=26) > >>> >> >>>>>>>> > >>> >> >>>>>>>> # grab some space > >>> >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), > >>> >> >>>>>>>> dtype=np.float32) > >>> >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), > >>> >> >>>>>>>> dtype=np.float32) > >>> >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), > >>> >> dtype=np.float32) > >>> >> * > >>> >> >>>>>>>> np.nan > >>> >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >>> >> dtype=np.float32) > >>> >> >>>>>>>> * > >>> >> >>>>>>>> np.nan > >>> >> >>>>>>>> > >>> >> >>>>>>>> # extract the data > >>> >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] > >>> >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] > >>> >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, > >>> data1_snow) > >>> >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, > >>> data2_snow) > >>> >> >>>>>>>> #for month in xrange(numyears * nummonths): > >>> >> >>>>>>>> # for i in xrange(numpts): > >>> >> >>>>>>>> # data1 = > >>> >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] > >>> >> >>>>>>>> # data2 = > >>> >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] > >>> >> >>>>>>>> # if data1 >= 0.0: > >>> >> >>>>>>>> # data1_snow[month,i] = data1 > >>> >> >>>>>>>> # else: > >>> >> >>>>>>>> # data1_snow[month,i] = np.nan > >>> >> >>>>>>>> # if data2 > 0.0: > >>> >> >>>>>>>> # data2_snow[month,i] = data2 > >>> >> >>>>>>>> # else: > >>> >> >>>>>>>> # data2_snow[month,i] = np.nan > >>> >> >>>>>>>> > >>> >> >>>>>>>> # exclude any months from *both* arrays where we have > >>> dodgy > >>> >> >>>>>>>> data, > >>> >> >>>>>>>> else > >>> >> >>>>>>>> we > >>> >> >>>>>>>> # can't do the correlations correctly!! > >>> >> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, > >>> >> data1_snow) > >>> >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, > >>> >> data1_snow) > >>> >> >>>>>>>> > >>> >> >>>>>>>> # put data on a regular grid... > >>> >> >>>>>>>> print 'regridding landpts...' 
> >>> >> >>>>>>>> for i in xrange(numpts): > >>> >> >>>>>>>> # exclude the NaN, note masking them doesn't work in > >>> the > >>> >> >>>>>>>> stats > >>> >> >>>>>>>> func > >>> >> >>>>>>>> x = data1_snow[:,i] > >>> >> >>>>>>>> x = x[np.isfinite(x)] > >>> >> >>>>>>>> y = data2_snow[:,i] > >>> >> >>>>>>>> y = y[np.isfinite(y)] > >>> >> >>>>>>>> > >>> >> >>>>>>>> # r^2 > >>> >> >>>>>>>> # exclude v.small arrays, i.e. we need just less > over > >>> 4 > >>> >> >>>>>>>> years > >>> >> >>>>>>>> of > >>> >> >>>>>>>> data > >>> >> >>>>>>>> if len(x) and len(y) > 50: > >>> >> >>>>>>>> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] > = > >>> >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 > >>> >> >>>>>>>> > >>> >> >>>>>>>> # wilcox signed rank test > >>> >> >>>>>>>> # make sure we have enough samples to do the test > >>> >> >>>>>>>> d = x - y > >>> >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # > Keep > >>> all > >>> >> >>>>>>>> non-zero > >>> >> >>>>>>>> differences > >>> >> >>>>>>>> count = len(d) > >>> >> >>>>>>>> if count > 10: > >>> >> >>>>>>>> z, pval = stats.wilcoxon(x, y) > >>> >> >>>>>>>> # only map out sign different data > >>> >> >>>>>>>> if pval < 0.05: > >>> >> >>>>>>>> > >>> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >>> >> = > >>> >> >>>>>>>> np.mean(x - y) > >>> >> >>>>>>>> > >>> >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) > >>> >> >>>>>>>> > >>> >> >>>>>>>> > >>> >> >>>>>>>> josef.pktd wrote: > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe < > >>> mdekauwe at gmail.com> > >>> >> >>>>>>>>> wrote: > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> Also I then need to remap the 2D array I make onto > another > >>> >> grid > >>> >> >>>>>>>>>> (the > >>> >> >>>>>>>>>> world in > >>> >> >>>>>>>>>> this case). Which again I had am doing with a loop (note > >>> >> numpts > >>> >> >>>>>>>>>> is > >>> >> >>>>>>>>>> a > >>> >> >>>>>>>>>> lot > >>> >> >>>>>>>>>> bigger than my example above). 
> >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >>> >> dtype=np.float32) > >>> >> >>>>>>>>>> * > >>> >> >>>>>>>>>> np.nan > >>> >> >>>>>>>>>> for i in xrange(numpts): > >>> >> >>>>>>>>>> # exclude the NaN, note masking them doesn't work > >>> in > >>> >> the > >>> >> >>>>>>>>>> stats > >>> >> >>>>>>>>>> func > >>> >> >>>>>>>>>> x = data1_snow[:,i] > >>> >> >>>>>>>>>> x = x[np.isfinite(x)] > >>> >> >>>>>>>>>> y = data2_snow[:,i] > >>> >> >>>>>>>>>> y = y[np.isfinite(y)] > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> # wilcox signed rank test > >>> >> >>>>>>>>>> # make sure we have enough samples to do the test > >>> >> >>>>>>>>>> d = x - y > >>> >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # > >>> Keep > >>> >> all > >>> >> >>>>>>>>>> non-zero > >>> >> >>>>>>>>>> differences > >>> >> >>>>>>>>>> count = len(d) > >>> >> >>>>>>>>>> if count > 10: > >>> >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) > >>> >> >>>>>>>>>> # only map out sign different data > >>> >> >>>>>>>>>> if pval < 0.05: > >>> >> >>>>>>>>>> > >>> >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >>> >> >>>>>>>>>> = > >>> >> >>>>>>>>>> np.mean(x - y) > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> Now I think I can push the data in one move into the > >>> >> >>>>>>>>>> wilcoxStats_snow > >>> >> >>>>>>>>>> array > >>> >> >>>>>>>>>> by removing the index, > >>> >> >>>>>>>>>> but I can't see how I will get the individual x and y pts > >>> for > >>> >> >>>>>>>>>> each > >>> >> >>>>>>>>>> array > >>> >> >>>>>>>>>> member correctly without the loop, this was my attempt > >>> which > >>> >> of > >>> >> >>>>>>>>>> course > >>> >> >>>>>>>>>> doesn't work! > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> x = data1_snow[:,:] > >>> >> >>>>>>>>>> x = x[np.isfinite(x)] > >>> >> >>>>>>>>>> y = data2_snow[:,:] > >>> >> >>>>>>>>>> y = y[np.isfinite(y)] > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> # r^2 > >>> >> >>>>>>>>>> # exclude v.small arrays, i.e. we need just less over 4 > >>> years > >>> >> of > >>> >> >>>>>>>>>> data > >>> >> >>>>>>>>>> if len(x) and len(y) > 50: > >>> >> >>>>>>>>>> pearsonsr_snow[((180-1)-(rows-1)),cols-1] = > >>> >> (stats.pearsonr(x, > >>> >> >>>>>>>>>> y)[0])**2 > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> If you want to do pairwise comparisons with > stats.wilcoxon, > >>> >> then > >>> >> >>>>>>>>> you > >>> >> >>>>>>>>> might be stuck with the loop, since wilcoxon takes only > two > >>> 1d > >>> >> >>>>>>>>> arrays > >>> >> >>>>>>>>> at a time (if I read the help correctly). > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> Also the presence of nans might force the use a loop. > >>> >> stats.mstats > >>> >> >>>>>>>>> has > >>> >> >>>>>>>>> masked array versions, but I didn't see wilcoxon in the > >>> list. > >>> >> >>>>>>>>> (Even > >>> >> >>>>>>>>> when vectorized operations would work with regular arrays, > >>> nan > >>> >> or > >>> >> >>>>>>>>> masked array versions still have to loop in many cases.) > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> If you have many columns with count <= 10, so that > wilcoxon > >>> is > >>> >> not > >>> >> >>>>>>>>> calculated then it might be worth to use only array > >>> operations > >>> >> up > >>> >> >>>>>>>>> to > >>> >> >>>>>>>>> that point. If wilcoxon is calculated most of the time, > >>> then > >>> >> it's > >>> >> >>>>>>>>> not > >>> >> >>>>>>>>> worth thinking too hard about this. > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> Josef > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> thanks. 
> >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> > >>> >> >>>>>>>>>> mdekauwe wrote: > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both > >>> methods > >>> >> >>>>>>>>>>> work. > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> I don't quite get what you mean about slicing with axis > > > >>> 3. > >>> >> Is > >>> >> >>>>>>>>>>> there > >>> >> >>>>>>>>>>> a > >>> >> >>>>>>>>>>> link you can recommend I should read? Does that mean > >>> given > >>> I > >>> >> >>>>>>>>>>> have > >>> >> >>>>>>>>>>> 4dims > >>> >> >>>>>>>>>>> that Josef's suggestion would be more advised in this > >>> case? > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> There were several discussions on the mailing lists (fancy > >>> >> slicing > >>> >> >>>>>>>>> and > >>> >> >>>>>>>>> indexing). Your case is safe, but if you run in future > into > >>> >> funny > >>> >> >>>>>>>>> shapes, you can look up the details. > >>> >> >>>>>>>>> when in doubt, I use np.arange(...) > >>> >> >>>>>>>>> > >>> >> >>>>>>>>> Josef > >>> >> >>>>>>>>> > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> Thanks. > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> > >>> >> >>>>>>>>>>> josef.pktd wrote: > >>> >> >>>>>>>>>>>> > >>> >> >>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe < > >>> >> mdekauwe at gmail.com> > >>> >> >>>>>>>>>>>> wrote: > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> Thanks that works... > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> So the way to do it is with np.arange(tsteps)[:,None], > >>> that > >>> >> >>>>>>>>>>>>> was > >>> >> >>>>>>>>>>>>> the > >>> >> >>>>>>>>>>>>> step > >>> >> >>>>>>>>>>>>> I > >>> >> >>>>>>>>>>>>> was struggling with, so this forms a 2D array which > >>> >> replaces > >>> >> >>>>>>>>>>>>> the > >>> >> >>>>>>>>>>>>> the > >>> >> >>>>>>>>>>>>> two > >>> >> >>>>>>>>>>>>> for > >>> >> >>>>>>>>>>>>> loops? Do I have that right? > >>> >> >>>>>>>>>>>> > >>> >> >>>>>>>>>>>> Yes, but as Zachary showed, if you need the full index > >>> in > >>> a > >>> >> >>>>>>>>>>>> dimension, > >>> >> >>>>>>>>>>>> then you can use slicing. It might be faster. > >>> >> >>>>>>>>>>>> And a warning, mixing slices and index arrays with 3 or > >>> more > >>> >> >>>>>>>>>>>> dimensions can have some surprise switching of axes. > >>> >> >>>>>>>>>>>> > >>> >> >>>>>>>>>>>> Josef > >>> >> >>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> A lot quicker...! > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> Martin > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> josef.pktd wrote: > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> wrote: > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> Hi, > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> I am trying to extract data from a 4D array and > store > >>> it > >>> >> in > >>> >> >>>>>>>>>>>>>>> a > >>> >> >>>>>>>>>>>>>>> 2D > >>> >> >>>>>>>>>>>>>>> array, > >>> >> >>>>>>>>>>>>>>> but > >>> >> >>>>>>>>>>>>>>> avoid my current usage of the for loops for speed, > as > >>> in > >>> >> >>>>>>>>>>>>>>> reality > >>> >> >>>>>>>>>>>>>>> the > >>> >> >>>>>>>>>>>>>>> arrays > >>> >> >>>>>>>>>>>>>>> sizes are quite big. 
Could someone also try and > >>> explain > >>> >> the > >>> >> >>>>>>>>>>>>>>> solution > >>> >> >>>>>>>>>>>>>>> as > >>> >> >>>>>>>>>>>>>>> well > >>> >> >>>>>>>>>>>>>>> if they have a spare moment as I am still finding it > >>> >> quite > >>> >> >>>>>>>>>>>>>>> difficult > >>> >> >>>>>>>>>>>>>>> to > >>> >> >>>>>>>>>>>>>>> get > >>> >> >>>>>>>>>>>>>>> over the habit of using loops (C convert for my > >>> sins). > >>> I > >>> >> get > >>> >> >>>>>>>>>>>>>>> that > >>> >> >>>>>>>>>>>>>>> one > >>> >> >>>>>>>>>>>>>>> could > >>> >> >>>>>>>>>>>>>>> precompute the indices's i and j i.e. > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> i = np.arange(tsteps) > >>> >> >>>>>>>>>>>>>>> j = np.arange(numpts) > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> but just can't get my head round how i then use > >>> them... > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> Thanks, > >>> >> >>>>>>>>>>>>>>> Martin > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> import numpy as np > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> numpts=10 > >>> >> >>>>>>>>>>>>>>> tsteps = 12 > >>> >> >>>>>>>>>>>>>>> vari = 22 > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, 1)) > >>> >> >>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), > >>> dtype=np.float32) > >>> >> >>>>>>>>>>>>>>> index = np.arange(numpts) > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> for i in xrange(tsteps): > >>> >> >>>>>>>>>>>>>>> for j in xrange(numpts): > >>> >> >>>>>>>>>>>>>>> new_data[i,j] = data[i,5,index[j],0] > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> The index arrays need to be broadcastable against > each > >>> >> other. > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> I think this should do it > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, > >>> >> >>>>>>>>>>>>>> np.arange(numpts), > >>> >> >>>>>>>>>>>>>> 0] > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> Josef > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> -- > >>> >> >>>>>>>>>>>>>>> View this message in context: > >>> >> >>>>>>>>>>>>>>> > >>> >> > http://old.nabble.com/removing-for-loops...-tp28633477p28633477.html > >>> >> >>>>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >>> >> Nabble.com. > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>>> _______________________________________________ > >>> >> >>>>>>>>>>>>>>> SciPy-User mailing list > >>> >> >>>>>>>>>>>>>>> SciPy-User at scipy.org > >>> >> >>>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> >> >>>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> _______________________________________________ > >>> >> >>>>>>>>>>>>>> SciPy-User mailing list > >>> >> >>>>>>>>>>>>>> SciPy-User at scipy.org > >>> >> >>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> > >>> >> >>>>>>>>>>>>> -- > >>> >> >>>>>>>>>>>>> View this message in context: > >>> >> >>>>>>>>>>>>> > >>> >> > http://old.nabble.com/removing-for-loops...-tp28633477p28634924.html > >>> >> >>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >>> >> Nabble.com. 
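To make the fancy-indexing answer quoted above concrete, here is a self-contained sketch using the toy shapes from the original post (the np.allclose check is added purely for illustration, it is not from the thread):

    import numpy as np

    numpts = 10
    tsteps = 12
    vari = 22

    data = np.random.random((tsteps, vari, numpts, 1))

    # double loop from the original post
    new_data = np.zeros((tsteps, numpts), dtype=np.float32)
    for i in xrange(tsteps):
        for j in xrange(numpts):
            new_data[i, j] = data[i, 5, j, 0]

    # broadcasted index arrays: (tsteps, 1) against (numpts,) -> (tsteps, numpts)
    new_data2 = data[np.arange(tsteps)[:, None], 5, np.arange(numpts), 0]

    print np.allclose(new_data, new_data2)   # True

The column vector np.arange(tsteps)[:, None] and the row vector np.arange(numpts) broadcast against each other, so every (i, j) pair is generated without the explicit loops.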
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From seb.haase at gmail.com Thu Jun 10 14:58:49 2010
From: seb.haase at gmail.com (Sebastian Haase)
Date: Thu, 10 Jun 2010 20:58:49 +0200
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves
In-Reply-To: References: Message-ID:

On Thu, Jun 10, 2010 at 8:27 PM, wrote:
> On Thu, Jun 10, 2010 at 4:05 AM, Sebastian Haase wrote:
>> Hi,
>>
>> so far I have been using scipy.optimize.leastsq to satisfy all my
>> curve fitting needs.
>> But now I am thinking about "global fitting" - i.e. fitting multiple
>> dataset with shared parameters
>> (e.g. ref here:
>> http://www.originlab.com/index.aspx?go=Products/Origin/DataAnalysis/CurveFitting/GlobalFitting)
>>
>> I have looked here (http://www.scipy.org/Cookbook/FittingData) and here
>> (http://docs.scipy.org/doc/scipy/reference/optimize.html)
>>
>> Can someone provide an example? Which of the routines of
>> scipy.optimize are "easiest" to use?
>>
>> Finally, I'm thinking about a "much more" complicated fitting task:
>> fitting two sets of datasets with two types of functions.
>> In total I have 10 datasets to be fit with a function f1, and 10 more
>> to be fit with function f2. Each function depends on 6 parameters
>> A1,A2,A3, r1,r2,r3.
>> A1,A2,A3 should be identical ("shared") between all 20 sets, while
>> r1,r2,r3 should be shared between the i-th set of type f1 and the i-th
>> set of f2.
>> Last but not least it would be nice if one could specify constraints
>> such that r1,r2,r3 >0 and A1+A2+A3 == 1 and 0<=Ai<=1.
>>
>> ;-) Is this too much?
>>
>> Thanks for any help or hints,
>
> Assuming your noise or error terms are uncorrelated, I would still use
> optimize.leastsq or optimize.curve_fit using a function that stacks
> all the errors in one 1-d array. If there are differences in the noise
> variance, then weights/sigma per function as in curve_fit can be used.
>
> Common parameter restrictions across functions can be encoded by using
> the same parameter in several (sub-)functions.
>
> In this case, I would impose the constraints through reparameterization, e.g.
> r1 = exp(r1_), ...
> A1 = exp(A1_)/(exp(A1_) + exp(A2_) + 1)
> A2 = exp(A2_)/(exp(A1_) + exp(A2_) + 1)
> A3 = 1/(exp(A1_) + exp(A2_) + 1)
>
> (maybe it's more tricky to get the standard deviation of the original
> parameter estimate)
>
> or as an alternative, calculate the total weighted sum of squared
> errors and use one of the constrained fmin routines in optimize.
>
> Josef

Thanks for the reply,
I will have to think about implementing my constraints by redefining
vars using those kinds of tricks with exp -- are you sure they don't
mess up convergence? I'm just thinking of the optimization steps
being so different depending on the current parameter value during the
iteration (i.e. the derivative of exp is very non-linear).

What are those other functions in
http://docs.scipy.org/doc/scipy/reference/optimize.html for?
(Once, a long time ago, I did use fmin_cobyla ... but don't remember why
I chose it. Maybe something like one-sided constraints!?)

Thanks,
Sebastian

From josef.pktd at gmail.com Thu Jun 10 15:34:42 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 10 Jun 2010 15:34:42 -0400
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves
In-Reply-To: References: Message-ID:

On Thu, Jun 10, 2010 at 2:58 PM, Sebastian Haase wrote:
> [earlier exchange quoted in full -- snipped]
> I will have to think about implementing my constraints by redefining
> vars using those kinds of tricks with exp -- are you sure they don't
> mess up convergence?
> [...]
> What are those other functions in
> http://docs.scipy.org/doc/scipy/reference/optimize.html for?
> (Once, a long time ago, I did use fmin_cobyla ... but don't remember why
> I chose it. Maybe something like one-sided constraints!?)

fmin_slsqp is the most flexible for constraints, but so far I have used
the constrained maximizers only for toy examples, and don't know how
robust they are. Imposing constraints by reparameterization or
penalization is in my experience not much of a problem, except getting a
slightly interior solution instead of an exact boundary value. The
multinomial logit parameterization for A1, A2, A3 is pretty common in
econometrics; I'm not sure what's the most common for non-negativity
constraints.

If you have analytical gradients, then one of the other optimizers
might be better than leastsq.

(These are just my impressions from my use cases.)

Josef

> Thanks,
> Sebastian
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
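To make the stacked-residuals approach from this exchange concrete, here is a minimal sketch (illustrative code, not from the thread: the model functions f1 and f2 and the toy data are invented, and only a single f1/f2 pair is fit rather than the ten pairs of the original question). All residuals are concatenated into one 1-d array for optimize.leastsq, which is what ties the shared parameters together, and the exp/multinomial-logit reparameterization enforces r_i > 0, 0 <= A_i <= 1 and A1+A2+A3 == 1:

    import numpy as np
    from scipy import optimize

    def unpack(p):
        # map free reals -> constrained parameters
        a1_, a2_ = p[0], p[1]
        den = np.exp(a1_) + np.exp(a2_) + 1.0
        A = np.array([np.exp(a1_) / den, np.exp(a2_) / den, 1.0 / den])
        r = np.exp(p[2:5])                     # r1, r2, r3 > 0
        return A, r

    def f1(t, A, r):
        # toy model: sum of decaying exponentials
        return A[0]*np.exp(-r[0]*t) + A[1]*np.exp(-r[1]*t) + A[2]*np.exp(-r[2]*t)

    def f2(t, A, r):
        # second toy model sharing the same parameters
        return A[0]/(1.0 + r[0]*t) + A[1]/(1.0 + r[1]*t) + A[2]/(1.0 + r[2]*t)

    def residuals(p, t, y1, y2):
        A, r = unpack(p)
        # one stacked error vector -> one global fit
        return np.concatenate((f1(t, A, r) - y1, f2(t, A, r) - y2))

    np.random.seed(1)
    t = np.linspace(0.0, 5.0, 200)
    A_true = np.array([0.5, 0.3, 0.2])
    r_true = np.array([3.0, 1.0, 0.3])
    y1 = f1(t, A_true, r_true) + 0.01 * np.random.randn(t.size)
    y2 = f2(t, A_true, r_true) + 0.01 * np.random.randn(t.size)

    p0 = np.zeros(5)    # start at A = (1/3, 1/3, 1/3), r = (1, 1, 1)
    popt, ier = optimize.leastsq(residuals, p0, args=(t, y1, y2))
    A_fit, r_fit = unpack(popt)
    print A_fit, A_fit.sum()    # A sums to 1 by construction
    print r_fit                 # positive by construction

With ten f1/f2 pairs one would keep a single (A1_, A2_) block, append one rate block per pair, and stack twenty residual vectors; whether this particular parameterization converges well will of course depend on the data.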
From mdekauwe at gmail.com Thu Jun 10 16:36:22 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 10 Jun 2010 13:36:22 -0700 (PDT)
Subject: [SciPy-User] re[SciPy-user] moving for loops...
In-Reply-To:
References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com> <28846602.post@talk.nabble.com>
Message-ID: <28848191.post@talk.nabble.com>

OK I think it is clear now!! Although what does the -1 bit do? This is surely the same as saying 11, 12 or numyears, nummonths?

thanks.

Benjamin Root-2 wrote:
>
> Well, let's try a more direct example. I am going to create a 4d array of
> random values to illustrate. I know the length of the dimensions won't be
> exactly the same as yours, but the example will still be valid.
>
> In this example, I will be able to calculate *all* of the monthly averages
> for *all* of the variables for *all* of the grid points without a single
> loop.
>
>> jules = np.random.random((132, 10, 50, 3))
>> print jules.shape
> (132, 10, 50, 3)
>
>> jules_5d = np.reshape(jules, (-1, 12) + jules.shape[1:])
>> print jules_5d.shape
> (11, 12, 10, 50, 3)
>
>> jules_5d = np.ma.masked_array(jules_5d, mask=jules_5d < 0.0)
>
>> jules_means = np.mean(jules_5d, axis=0)
>> print jules_means.shape
> (12, 10, 50, 3)
>
> voila! This matrix has a mean for each month across all eleven years for
> each datapoint in each of the 10 variables at each (I am assuming) level in
> the atmosphere.
>
> So, if you want to operate on a subset of your jules matrix (for example,
> you need to do special masking for each variable), then you can just work
> off of a slice of the original matrix, and many of these same concepts in
> this example and the previous example still apply.
>
> Ben Root
>
> On Thu, Jun 10, 2010 at 1:08 PM, mdekauwe wrote:
>
>> Hi,
>>
>> No, if I am honest I am a little confused how what you are suggesting would
>> work. As I see it the array I am trying to average from has dims
>> jules[(numyears * nummonths),1,numpts,0], where the first dimension (132) is
>> 12 months x 11 years. And as I said before I would like to average the jan
>> from the first, second, third years etc. Then the same for the feb and so
>> on.
>>
>> So I don't see how you get to your 2d array that you mention in the first
>> line? I thought what you were suggesting was I could precompute the step
>> that builds the index for the months e.g.
>>
>> mth_index = np.zeros(0)
>> for month in xrange(nummonths):
>>     mth_index = np.append(mth_index, np.arange(month, numyears * nummonths,
>> nummonths))
>>
>> and use this as my index to skip the for loop. Though I still have a for
>> loop I guess!
>>
>> Benjamin Root-2 wrote:
>> >
>> > Correction for me as well. To mask out the negative values, use masked
>> > arrays. So we will turn jules_2d into a masked array (second line), then
>> > all subsequent commands will still work as expected. It is very similar to
>> > replacing negative values with nans and using nanmin().
>> >
>> >> jules_2d = jules.reshape((-1, 12))
>> >> jules_2d = np.ma.masked_array(jules_2d, mask=jules_2d < 0.0)
>> >> jules_monthly = np.mean(jules_2d, axis=0)
>> >> print jules_monthly.shape
>> > (12,)
>> >
>> > Ben Root
>> >
>> > On Tue, Jun 8, 2010 at 7:49 PM, Benjamin Root wrote:
>> >
>> >> The np.mod in my example caused the data points to stay within [0, 11] in
>> >> order to illustrate that these are months. In my example, months are
>> >> columns, years are rows. In your desired output, months are rows and
>> >> years
In your desired output, months are rows and >> >> years >> >> are columns. It makes very little difference which way you have it. >> >> >> >> Anyway, let's imagine that we have a time series of data "jules". We >> can >> >> easily reshape this like so: >> >> >> >> > jules_2d = jules.reshape((-1, 12)) >> >> > jules_monthly = np.mean(jules_2d, axis=0) >> >> > print jules_monthly.shape >> >> (12,) >> >> >> >> voila! You have 12 values in jules_monthly which are means for that >> >> month >> >> across all years. >> >> >> >> protip - if you want yearly averages just change the ax parameter in >> >> np.mean(): >> >> > jules_yearly = np.mean(jules_2d, axis=1) >> >> >> >> I hope that makes my previous explanation clearer. >> >> >> >> Ben Root >> >> >> >> >> >> On Tue, Jun 8, 2010 at 5:41 PM, mdekauwe wrote: >> >> >> >>> >> >>> OK... >> >>> >> >>> but if I do... >> >>> >> >>> In [28]: np.mod(np.arange(nummonths*numyears), >> nummonths).reshape((-1, >> >>> nummonths)) >> >>> Out[28]: >> >>> array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) >> >>> >> >>> When really I would be after something like this I think? >> >>> >> >>> array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120], >> >>> [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121], >> >>> [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122] >> >>> etc, etc >> >>> >> >>> i.e. so for each month jump across the years. >> >>> >> >>> Not quite sure of this example...this is what I currently have which >> >>> does >> >>> seem to work, though I guess not completely efficiently. >> >>> >> >>> for month in xrange(nummonths): >> >>> tmp = jules[xrange(0, numyears * nummonths, >> nummonths),VAR,:,0] >> >>> tmp[tmp < 0.0] = np.nan >> >>> data[month,:] = np.mean(tmp, axis=0) >> >>> >> >>> >> >>> >> >>> >> >>> Benjamin Root-2 wrote: >> >>> > >> >>> > If you want an average for each month from your timeseries, then >> the >> >>> > sneaky >> >>> > way would be to reshape your array so that the time dimension is >> split >> >>> > into >> >>> > two (month, year) dimensions. >> >>> > >> >>> > For a 1-D array, this would be: >> >>> > >> >>> >> dataarray = numpy.mod(numpy.arange(36), 12) >> >>> >> print dataarray >> >>> > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, >> 3, >> >>> 4, >> >>> > 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, >> 8, >> >>> 9, >> >>> > 10, 11]) >> >>> >> datamatrix = dataarray.reshape((-1, 12)) >> >>> >> print datamatrix >> >>> > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], >> >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) >> >>> > >> >>> > Hope that helps. >> >>> > >> >>> > Ben Root >> >>> > >> >>> > >> >>> > On Fri, May 28, 2010 at 3:28 PM, mdekauwe >> wrote: >> >>> > >> >>> >> >> >>> >> OK so I just need to have a quick loop across the 12 months then, >> >>> that >> >>> is >> >>> >> fine, just thought there might have been a sneaky way! >> >>> >> >> >>> >> Really appreciated, getting there slowly! 
>> >>> >> >> >>> >> >> >>> >> >> >>> >> josef.pktd wrote: >> >>> >> > >> >>> >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe >> >>> wrote: >> >>> >> >> >> >>> >> >> ok - something like this then...but how would i get the index >> for >> >>> the >> >>> >> >> month >> >>> >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? >> >>> >> >> >> >>> >> >> data[month,:] = array[xrange(0, numyears * nummonths, >> >>> >> nummonths),VAR,:,0] >> >>> >> > >> >>> >> > you would still need to start at the right month >> >>> >> > data[month,:] = array[xrange(month, numyears * nummonths, >> >>> >> > nummonths),VAR,:,0] >> >>> >> > or >> >>> >> > data[month,:] = array[month: numyears * nummonths : >> >>> nummonths),VAR,:,0] >> >>> >> > >> >>> >> > an alternative would be a reshape with an extra month dimension >> and >> >>> >> > then sum only once over the year axis. this might be faster but >> >>> >> > trickier to get the correct reshape . >> >>> >> > >> >>> >> > Josef >> >>> >> > >> >>> >> >> >> >>> >> >> and would that be quicker than making an array months... >> >>> >> >> >> >>> >> >> months = np.arange(numyears * nummonths) >> >>> >> >> >> >>> >> >> and you that instead like you suggested x[start:end:12,:]? >> >>> >> >> >> >>> >> >> Many thanks again... >> >>> >> >> >> >>> >> >> >> >>> >> >> josef.pktd wrote: >> >>> >> >>> >> >>> >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe >> >>> wrote: >> >>> >> >>>> >> >>> >> >>>> Ok thanks...I'll take a look. >> >>> >> >>>> >> >>> >> >>>> Back to my loops issue. What if instead this time I wanted to >> >>> take >> >>> >> an >> >>> >> >>>> average so every march in 11 years, is there a quicker way to >> go >> >>> >> about >> >>> >> >>>> doing >> >>> >> >>>> that than my current method? >> >>> >> >>>> >> >>> >> >>>> nummonths = 12 >> >>> >> >>>> numyears = 11 >> >>> >> >>>> >> >>> >> >>>> for month in xrange(nummonths): >> >>> >> >>>> for i in xrange(numpts): >> >>> >> >>>> for ym in xrange(month, numyears * nummonths, >> nummonths): >> >>> >> >>>> data[month, i] += array[ym, VAR, >> land_pts_index[i], >> >>> 0] >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> x[start:end:12,:] gives you every 12th row of an array x >> >>> >> >>> >> >>> >> >>> something like this should work to get rid of the inner loop, >> or >> >>> you >> >>> >> >>> could directly put >> >>> >> >>> range(month, numyears * nummonths, nummonths) into the array >> >>> instead >> >>> >> >>> of ym and sum() >> >>> >> >>> >> >>> >> >>> Josef >> >>> >> >>> >> >>> >> >>> >> >>> >> >>>> >> >>> >> >>>> so for each point in the array for a given month i am jumping >> >>> >> through >> >>> >> >>>> and >> >>> >> >>>> getting the next years month and so on, summing it. >> >>> >> >>>> >> >>> >> >>>> Thanks... >> >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> josef.pktd wrote: >> >>> >> >>>>> >> >>> >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe >> > > >> >>> >> wrote: >> >>> >> >>>>>> >> >>> >> >>>>>> Could you possibly if you have time explain further your >> >>> comment >> >>> >> re >> >>> >> >>>>>> the >> >>> >> >>>>>> p-values, your suggesting I am misusing them? >> >>> >> >>>>> >> >>> >> >>>>> Depends on your use and interpretation >> >>> >> >>>>> >> >>> >> >>>>> test statistics, p-values are random variables, if you look >> at >> >>> >> several >> >>> >> >>>>> tests at the same time, some p-values will be large just by >> >>> chance. 
>> >>> >> >>>>> If, for example you just look at the largest test statistic, >> >>> then >> >>> >> the >> >>> >> >>>>> distribution for the max of several test statistics is not >> the >> >>> same >> >>> >> as >> >>> >> >>>>> the distribution for a single test statistic >> >>> >> >>>>> >> >>> >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons >> >>> >> >>>>> >> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm >> >>> >> >>>>> >> >>> >> >>>>> we also just had a related discussion for ANOVA post-hoc >> tests >> >>> on >> >>> >> the >> >>> >> >>>>> pystatsmodels group. >> >>> >> >>>>> >> >>> >> >>>>> Josef >> >>> >> >>>>>> >> >>> >> >>>>>> Thanks. >> >>> >> >>>>>> >> >>> >> >>>>>> >> >>> >> >>>>>> josef.pktd wrote: >> >>> >> >>>>>>> >> >>> >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe >> >>> >> >>> >> >>>>>>> wrote: >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the >> >>> >> comparison >> >>> >> >>>>>>>> for >> >>> >> >>>>>>>> each >> >>> >> >>>>>>>> pixel of the world and then I have a basemap function >> call >> >>> which >> >>> >> I >> >>> >> >>>>>>>> guess >> >>> >> >>>>>>>> slows it down further...hmm >> >>> >> >>>>>>> >> >>> >> >>>>>>> I don't see much that could be done differently, after a >> >>> brief >> >>> >> look. >> >>> >> >>>>>>> >> >>> >> >>>>>>> stats.pearsonr could be replaced by an array version using >> >>> >> directly >> >>> >> >>>>>>> the formula for correlation even with nans. wilcoxon looks >> >>> slow, >> >>> >> and >> >>> >> >>>>>>> I >> >>> >> >>>>>>> never tried or seen a faster version. >> >>> >> >>>>>>> >> >>> >> >>>>>>> just a reminder, the p-values are for a single test, when >> you >> >>> >> have >> >>> >> >>>>>>> many of them, then they don't have the right >> size/confidence >> >>> >> level >> >>> >> >>>>>>> for >> >>> >> >>>>>>> an overall or joint test. (some packages report a >> Bonferroni >> >>> >> >>>>>>> correction in this case) >> >>> >> >>>>>>> >> >>> >> >>>>>>> Josef >> >>> >> >>>>>>> >> >>> >> >>>>>>> >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> i.e. 
>> >>> >> >>>>>>>> >> >>> >> >>>>>>>> def compareSnowData(jules_var): >> >>> >> >>>>>>>> # Extract the 11 years of snow data and return >> >>> >> >>>>>>>> outrows = 180 >> >>> >> >>>>>>>> outcols = 360 >> >>> >> >>>>>>>> numyears = 11 >> >>> >> >>>>>>>> nummonths = 12 >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # Read various files >> >>> >> >>>>>>>> fname="world_valid_jules_pts.ascii" >> >>> >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, >> cols) >> >>> = >> >>> >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" >> >>> >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, >> numrows=15238, >> >>> >> >>>>>>>> numcols=1, >> >>> >> >>>>>>>> \ >> >>> >> >>>>>>>> timesteps=132, numvars=26) >> >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" >> >>> >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, >> numrows=15238, >> >>> >> >>>>>>>> numcols=1, >> >>> >> >>>>>>>> \ >> >>> >> >>>>>>>> timesteps=132, numvars=26) >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # grab some space >> >>> >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), >> >>> >> >>>>>>>> dtype=np.float32) >> >>> >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), >> >>> >> >>>>>>>> dtype=np.float32) >> >>> >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), >> >>> >> dtype=np.float32) >> >>> >> * >> >>> >> >>>>>>>> np.nan >> >>> >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >> >>> >> dtype=np.float32) >> >>> >> >>>>>>>> * >> >>> >> >>>>>>>> np.nan >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # extract the data >> >>> >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] >> >>> >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] >> >>> >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, >> >>> data1_snow) >> >>> >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, >> >>> data2_snow) >> >>> >> >>>>>>>> #for month in xrange(numyears * nummonths): >> >>> >> >>>>>>>> # for i in xrange(numpts): >> >>> >> >>>>>>>> # data1 = >> >>> >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] >> >>> >> >>>>>>>> # data2 = >> >>> >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] >> >>> >> >>>>>>>> # if data1 >= 0.0: >> >>> >> >>>>>>>> # data1_snow[month,i] = data1 >> >>> >> >>>>>>>> # else: >> >>> >> >>>>>>>> # data1_snow[month,i] = np.nan >> >>> >> >>>>>>>> # if data2 > 0.0: >> >>> >> >>>>>>>> # data2_snow[month,i] = data2 >> >>> >> >>>>>>>> # else: >> >>> >> >>>>>>>> # data2_snow[month,i] = np.nan >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # exclude any months from *both* arrays where we have >> >>> dodgy >> >>> >> >>>>>>>> data, >> >>> >> >>>>>>>> else >> >>> >> >>>>>>>> we >> >>> >> >>>>>>>> # can't do the correlations correctly!! >> >>> >> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, >> >>> >> data1_snow) >> >>> >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, >> >>> >> data1_snow) >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # put data on a regular grid... >> >>> >> >>>>>>>> print 'regridding landpts...' >> >>> >> >>>>>>>> for i in xrange(numpts): >> >>> >> >>>>>>>> # exclude the NaN, note masking them doesn't work >> in >> >>> the >> >>> >> >>>>>>>> stats >> >>> >> >>>>>>>> func >> >>> >> >>>>>>>> x = data1_snow[:,i] >> >>> >> >>>>>>>> x = x[np.isfinite(x)] >> >>> >> >>>>>>>> y = data2_snow[:,i] >> >>> >> >>>>>>>> y = y[np.isfinite(y)] >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # r^2 >> >>> >> >>>>>>>> # exclude v.small arrays, i.e. 
we need just less >> over >> >>> 4 >> >>> >> >>>>>>>> years >> >>> >> >>>>>>>> of >> >>> >> >>>>>>>> data >> >>> >> >>>>>>>> if len(x) and len(y) > 50: >> >>> >> >>>>>>>> >> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] >> = >> >>> >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> # wilcox signed rank test >> >>> >> >>>>>>>> # make sure we have enough samples to do the test >> >>> >> >>>>>>>> d = x - y >> >>> >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # >> Keep >> >>> all >> >>> >> >>>>>>>> non-zero >> >>> >> >>>>>>>> differences >> >>> >> >>>>>>>> count = len(d) >> >>> >> >>>>>>>> if count > 10: >> >>> >> >>>>>>>> z, pval = stats.wilcoxon(x, y) >> >>> >> >>>>>>>> # only map out sign different data >> >>> >> >>>>>>>> if pval < 0.05: >> >>> >> >>>>>>>> >> >>> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >> >>> >> = >> >>> >> >>>>>>>> np.mean(x - y) >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> >> >>> >> >>>>>>>> josef.pktd wrote: >> >>> >> >>>>>>>>> >> >>> >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe < >> >>> mdekauwe at gmail.com> >> >>> >> >>>>>>>>> wrote: >> >>> >> >>>>>>>>>> >> >>> >> >>>>>>>>>> Also I then need to remap the 2D array I make onto >> another >> >>> >> grid >> >>> >> >>>>>>>>>> (the >> >>> >> >>>>>>>>>> world in >> >>> >> >>>>>>>>>> this case). Which again I had am doing with a loop >> (note >> >>> >> numpts >> >>> >> >>>>>>>>>> is >> >>> >> >>>>>>>>>> a >> >>> >> >>>>>>>>>> lot >> >>> >> >>>>>>>>>> bigger than my example above). >> >>> >> >>>>>>>>>> >> >>> >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), >> >>> >> dtype=np.float32) >> >>> >> >>>>>>>>>> * >> >>> >> >>>>>>>>>> np.nan >> >>> >> >>>>>>>>>> for i in xrange(numpts): >> >>> >> >>>>>>>>>> # exclude the NaN, note masking them doesn't >> work >> >>> in >> >>> >> the >> >>> >> >>>>>>>>>> stats >> >>> >> >>>>>>>>>> func >> >>> >> >>>>>>>>>> x = data1_snow[:,i] >> >>> >> >>>>>>>>>> x = x[np.isfinite(x)] >> >>> >> >>>>>>>>>> y = data2_snow[:,i] >> >>> >> >>>>>>>>>> y = y[np.isfinite(y)] >> >>> >> >>>>>>>>>> >> >>> >> >>>>>>>>>> # wilcox signed rank test >> >>> >> >>>>>>>>>> # make sure we have enough samples to do the >> test >> >>> >> >>>>>>>>>> d = x - y >> >>> >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # >> >>> Keep >> >>> >> all >> >>> >> >>>>>>>>>> non-zero >> >>> >> >>>>>>>>>> differences >> >>> >> >>>>>>>>>> count = len(d) >> >>> >> >>>>>>>>>> if count > 10: >> >>> >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) >> >>> >> >>>>>>>>>> # only map out sign different data >> >>> >> >>>>>>>>>> if pval < 0.05: >> >>> >> >>>>>>>>>> >> >>> >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] >> >>> >> >>>>>>>>>> = >> >>> >> >>>>>>>>>> np.mean(x - y) >> >>> >> >>>>>>>>>> >> >>> >> >>>>>>>>>> Now I think I can push the data in one move into the >> >>> >> >>>>>>>>>> wilcoxStats_snow >> >>> >> >>>>>>>>>> array >> >>> >> >>>>>>>>>> by removing the index, >> >>> >> >>>>>>>>>> but I can't see how I will get the individual x and y >> pts >> >>> for >> >>> >> >>>>>>>>>> each >> >>> >> >>>>>>>>>> array >> >>> >> >>>>>>>>>> member correctly without the loop, this was my attempt >> >>> which >> >>> >> of >> >>> >> >>>>>>>>>> course >> >>> >> >>>>>>>>>> doesn't work! 
>> >>> [...]

-- 
View this message in context: http://old.nabble.com/removing-for-loops...-tp28633477p28848191.html
Sent from the Scipy-User mailing list archive at Nabble.com.
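As a runnable summary of the reshape/masked-array approach worked out in the message above (a sketch with made-up toy sizes; in the real jules array there are ~15238 points and 26 variables):

    import numpy as np

    numyears, nummonths, numpts, numvars = 11, 12, 5, 26
    VAR = 5

    # toy stand-in for the (numyears*nummonths, numvars, numpts, 1) jules array;
    # negative values play the role of the "dodgy" data to be excluded
    jules = np.random.uniform(-0.1, 1.0, (numyears * nummonths, numvars, numpts, 1))

    tmp = jules[:, VAR, :, 0]                      # (132, numpts)
    # the -1 tells numpy to infer that axis's length from the total size,
    # so reshape((-1, nummonths, numpts)) == reshape((numyears, nummonths, numpts))
    tmp = tmp.reshape((-1, nummonths, numpts))
    tmp = np.ma.masked_array(tmp, mask=tmp < 0.0)  # mask rather than NaN

    monthly_means = tmp.mean(axis=0)               # (nummonths, numpts)
    print monthly_means.shape                      # Jan..Dec, averaged across years

This also answers the -1 question: yes, here it is exactly the same as writing numyears, it just saves you from hard-coding the 11.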
From d.l.goldsmith at gmail.com Thu Jun 10 16:52:08 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 10 Jun 2010 13:52:08 -0700
Subject: [SciPy-User] OT: Stats Q (educate me please)
Message-ID:

I recently had cause to ponder moving averages, and given my general interest in noise theory, it got me wondering: relative to the PS of the signal, what's the PS of an n-width moving average? After unsuccessfully (though far from exhaustively) looking for some results in the literature, I just started thinking about it myself, and came to realize, both based on what the graphs are saying about the situation and then in light of that, in retrospect, conceptually as well, that since the moving average is a smoothing of the signal, it's some kind of low-pass filter (removing power at higher frequencies), which begs the question: what kind of low-pass filter? In particular, is it a truncation filter, completely removing any power from windows smaller than n (the intuitive, though far from obvious, conclusion), or is it an attenuation filter, applying some monotonically decreasing envelope to the PS for frequencies corresponding to windows smaller than n? (Or does it somehow influence even the power of frequencies corresponding to windows larger than n?) Reference/proof? Thanks for the education.

DG
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jh at physics.ucf.edu Thu Jun 10 17:37:27 2010
From: jh at physics.ucf.edu (Joe Harrington)
Date: Thu, 10 Jun 2010 17:37:27 -0400
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves
In-Reply-To: (scipy-user-request@scipy.org)
References: Message-ID:

On Thu, 10 Jun 2010 14:27:13 -0400, josef.pktd at gmail.com wrote:
> [Sebastian's question and Josef's reply, quoted in full above -- snipped]

In the end, you have just one function and one set of parameters. Some parameters apply to terms that deal with part of the data and some to terms that deal with all of it. Correlations can be very complex in such fits, and I've heard many investigators swear that there could not be any in their datasets, when in fact there turned out to be many.

For example, I look at series of stellar images, and search for the drop and recovery in brightness when a planet passes behind a star.
There might be 10000 images in such a series, each yielding a single number for the total brightness of the planetary system. The camera has sensitivity variations that depend on the x,y position of the star in the image, and these need to be modeled along with the eclipse parameters (depth of eclipse dip, time, duration). You might think that position in the image and eclipse depth are uncorrelated, since one is in the stellar system, parsecs away, and one is in your camera. But, if there is a periodic motion in x,y space, and it takes about as long to cycle as the duration of the eclipse, you can have a correlation between eclipse depth and the parameters used to take out position-dependent sensitivity. This means eclipse depth can drift up and a positional parameter can drift down without changing chi-squared much, leading to a possible major loss of precision in both, and maybe of accuracy.

Least-squares fitters can still do well in finding a minimum chi-squared, but often badly misreport parameter errors because those errors are correlated and/or non-Gaussian. They tell you nothing about the parameter distributions, which might be very non-Gaussian, even multimodal. This is why you should consider doing a Markov-chain Monte Carlo analysis after you find the minimum (some people use MCMC to find the minimum, but it's not as robust against local minima as a good minimizer). Then, take all the possible pairs of parameters and make a plot for each one from all the MCMC trials (this may be many dozens of plots). Also, look at the parameter histograms. This is, so far as I know, the best and most robust way to do complex model fits. You can also put priors on the MCMC that prevent, say, negative values of a parameter. This will change the error distribution and make it non-Gaussian, and the result you will get from the MCMC will then be much more reliable than from a minimizer.

--jh--
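A bare-bones sketch of the Metropolis-style exploration Joe describes (illustrative only, not his code: a straight-line model stands in for the eclipse-plus-sensitivity model, and a real analysis would add proposal tuning, burn-in, and convergence checks):

    import numpy as np

    np.random.seed(0)
    x = np.linspace(0.0, 1.0, 100)
    y = 2.0*x + 1.0 + 0.1*np.random.randn(x.size)
    sigma = 0.1

    def loglike(p):
        a, b = p
        return -0.5 * np.sum(((y - (a*x + b)) / sigma)**2)

    p = np.array([2.0, 1.0])    # start near the least-squares minimum
    lp = loglike(p)
    chain = np.empty((20000, 2))
    for i in xrange(chain.shape[0]):
        trial = p + 0.02 * np.random.randn(2)
        lt = loglike(trial)
        if lt - lp > np.log(np.random.rand()):   # Metropolis acceptance rule
            p, lp = trial, lt
        chain[i] = p

    # histograms of chain[:, k] give the per-parameter distributions;
    # a scatter plot of chain[:, 0] vs chain[:, 1] shows their correlation
    print chain.mean(axis=0), chain.std(axis=0)
    print np.corrcoef(chain.T)[0, 1]   # slope and intercept are anti-correlated here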
From d.l.goldsmith at gmail.com  Thu Jun 10 19:29:05 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 10 Jun 2010 16:29:05 -0700
Subject: [SciPy-User] OT: Stats Q (educate me please)
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 10, 2010 at 2:41 PM, Anne Archibald wrote:

> On 10 June 2010 16:52, David Goldsmith wrote:
> [...]
>
> An n-width moving average is (I'm assuming equally-spaced data points)
> convolution by a boxcar of width n. So its effect on the power
> spectrum is multiplication by a sinc function whose first zero is at a
> period of n samples and whose amplitude at zero frequency is 1. (If
> you have a finite-length data set and are doing circular convolution,
> for "sinc" read the Dirichlet kernel.)

Excellent, Anne, thanks!

DG

> Anne
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide. (As interpreted
by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From charlesr.harris at gmail.com  Thu Jun 10 20:48:32 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 10 Jun 2010 18:48:32 -0600
Subject: [SciPy-User] OT: Stats Q (educate me please)
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 10, 2010 at 3:41 PM, Anne Archibald wrote:

> On 10 June 2010 16:52, David Goldsmith wrote:
> [...]
>
> An n-width moving average is (I'm assuming equally-spaced data points)
> convolution by a boxcar of width n. So its effect on the power
> spectrum is multiplication by a sinc function whose first zero is at a
> period of n samples and whose amplitude at zero frequency is 1. (If
> you have a finite-length data set and are doing circular convolution,
> for "sinc" read the Dirichlet kernel.)

The sinc needs to be squared for the power spectrum...

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
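A quick numerical check of this exchange (a sketch: the window width and
FFT length are arbitrary choices, and none of this code is from the
thread):

import numpy as np

n, N = 8, 1024                        # moving-average width, FFT length
h = np.ones(n) / n                    # boxcar impulse response
f = np.arange(N // 2 + 1) / float(N)  # frequencies in cycles per sample
gain = np.abs(np.fft.fft(h, N))[:N // 2 + 1]

# Dirichlet kernel written via np.sinc: |sin(pi*f*n) / (n*sin(pi*f))|
dirichlet = np.abs(np.sinc(n * f) / np.sinc(f))

print(np.allclose(gain, dirichlet))   # True
# The power spectrum is multiplied by gain**2 (the squared sinc),
# whose first zero is at f = 1/n, i.e. a period of n samples.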
From charlesr.harris at gmail.com  Thu Jun 10 21:28:47 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 10 Jun 2010 19:28:47 -0600
Subject: [SciPy-User] Global Curve Fitting of 2 functions to 2 sets of data-curves
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 10, 2010 at 12:27 PM, wrote:

> On Thu, Jun 10, 2010 at 4:05 AM, Sebastian Haase wrote:
> [...]
>
> Assuming your noise or error terms are uncorrelated, I would still use
> optimize.leastsq or optimize.curve_fit using a function that stacks
> all the errors in one 1-d array. If there are differences in the noise
> variance, then weights/sigma per function as in curve_fit can be used.

Yep, I just did that today for 1024 data sets of ~800 points, sharing 9
parameters and having 7 parameters unique to each data set. I was able to
simplify it a bit because I was only interested in the 9 parameters and
they were also the only ones that entered in non-linearly.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From d.l.goldsmith at gmail.com  Thu Jun 10 21:51:32 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 10 Jun 2010 18:51:32 -0700
Subject: [SciPy-User] OT: Stats Q (educate me please)
In-Reply-To: 
References: 
Message-ID: 

On Thu, Jun 10, 2010 at 5:48 PM, Charles R Harris wrote:

> On Thu, Jun 10, 2010 at 3:41 PM, Anne Archibald <
> aarchiba at physics.mcgill.ca> wrote:
> [...]
>
>> An n-width moving average is (I'm assuming equally-spaced data points)
>> convolution by a boxcar of width n. So its effect on the power
>> spectrum is multiplication by a sinc function [...]
>
> The sinc needs to be squared for the power spectrum... Chuck

Right, thanks. (I was wondering 'bout that: I didn't immediately run and
check my Abramowitz, but I didn't _think_ sinc was strictly non-negative,
as a PS envelope must be if the result is to remain a PS.)

DG

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.
Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide. (As interpreted
by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tinauser at libero.it  Fri Jun 11 07:44:48 2010
From: tinauser at libero.it (tinauser)
Date: Fri, 11 Jun 2010 04:44:48 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <219460.24132.qm@web33001.mail.mud.yahoo.com>
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com> <28831237.post@talk.nabble.com> <219460.24132.qm@web33001.mail.mud.yahoo.com>
Message-ID: <28854111.post@talk.nabble.com>

Dear David,

thanks for your suggestions. I have, however, some doubts, probably
coming from my inexperience in C.

David Baddeley wrote:
>
> If your description holds, what you're doing is allocating a block of
> memory (with PyArray_SimpleNew), then changing the pointer so that it
> points to your camera buffer,

that's right

David Baddeley wrote:
>
> without ever using the memory you allocated. The original memory
> allocated with PyArray_SimpleNew will get leaked at this point. When
> Python comes to garbage collect your array, the camera buffer will be
> dealloced instead of the original block of memory. This sounds all
> BAD!!!

This I can't understand. I allocate a PyArray (2 bytes) only once, at
initialization time; at run time I'm just updating the value of the data
pointer each time I want to get a frame. Python is always going to use
this PyArray, which is always at the same address, and look for the data
in a different section of the buffer, according to the updated value of
the "data" field. Am I missing something?

David Baddeley wrote:
>
> I have a feeling that PyArray_SimpleNew also sets the reference count
> to 1, so there's no need to incref it (although you'd be well advised
> to check up on this). If this is the case, increfing effectively
> ensures that the array will never be garbage collected and creates a
> memory leak.

I'll check that

David Baddeley wrote:
>
> depending on how the data gets from the camera into the buffer you've
> got a few options - is it a preallocated buffer which gets constantly
> refreshed by the camera, or is it a buffer allocated on the fly to hold
> the results of a command such as camera_get_frame(*buffer).

it is the first. The buffer is preallocated and the command is
camera_get_frame(*frame). This command gives me the pointer to the frame
(which is within the preallocated buffer).

David Baddeley wrote:
>
> If it's the first you could either ...
>
> Use PyArray_SimpleNewFromData on your camera buffer, with the caveat
> that the values in the resulting array will be constantly refreshed
> from the camera.
>
> or, use memcpy to copy the contents of the buffer to your newly
> allocated (with PyArray_SimpleNew) array - this way the python array
> won't change as the camera takes another frame. This also has the
> advantage that the c code doesn't need to worry about whether python is
> still using the original buffer before deleting it.

I don't think I can use the first solution; I'm using a buffer because,
while I need to record all the frames, I can accept missing some frames
when painting the widget. Therefore, when I ask for a frame, the
recording camera locks the frame and I can use that memory without a time
limit. I avoided using memcpy because I thought it was quite slow
compared to just passing a pointer.
Is there a way to check if I'm really leaking memory?

Thank you again

Lorenzo

David Baddeley wrote:
>
> If it's the second, the buffer contents won't be changing with time and
> I'd either use PyArray_SimpleNewFromData, or preferably, as this means
> you can let python handle the garbage collection for the frame, use
> PyArray_SimpleNew to allocate an array and pass the data pointer of
> this array to your camera_get_frame(*buffer) method. If you are stuck
> with a pre-allocated array and want to keep the python and c memory
> management as separate as possible, you could also use the memcpy
> route.
>
> cheers,
> David
>
> ----- Original Message ----
> From: tinauser
> To: scipy-user at scipy.org
> Sent: Thu, 10 June, 2010 2:35:09 AM
> Subject: Re: [SciPy-User] [SciPy-user] numpy and C
>
> Dear Charles,
>
> thanks again for the replies.
> Why do you say that it is difficult to free memory?
> What I do is to allocate the memory (incref it) before calling the
> Python script. The Python script then uses a timer to call a C function
> to which the allocated PyArrayObject (created with PyArray_SimpleNew)
> is passed. In C, the pointer of the PyArray is assigned to a pointer
> that points to a sort of data buffer that is filled from a camera. The
> data buffer is allocated elsewhere.
> When the python GUI is closed, I just decref my PyArrayObject, which
> I'm basically using just to pass pointer values.
>
> Charles R Harris wrote:
>>
>> On Wed, Jun 9, 2010 at 7:46 AM, Charles R Harris wrote:
>>>
>>> On Wed, Jun 9, 2010 at 5:38 AM, tinauser wrote:
>>>>
>>>> Dear Charles,
>>>> thanks for the reply.
>>>> The part of code causing the problem was exactly this
>>>>
>>>> Pymatout_img->data= cam_frame->data;
>>>> where Pymatout is a PyArrayObject and cam_frame is a structure
>>>> having a pointer to undefined char data.
>>>>
>>>> The code works all right if I recast in this way
>>>>
>>>> Pymatout_img->data= (char*)cam_frame->data;
>>>>
>>>> I'm not sure if this is allowed; I guessed it works because, even if
>>>> Pymatout_img->data is always a pointer to char, the PyArrayObject
>>>> looks in ->descr->type_num to see what is the data type.
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
View this message in context: http://old.nabble.com/numpy-and-C-tp28767579p28854111.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From tinauser at libero.it  Fri Jun 11 07:50:33 2010
From: tinauser at libero.it (tinauser)
Date: Fri, 11 Jun 2010 04:50:33 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: 
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com> <28831237.post@talk.nabble.com>
Message-ID: <28854160.post@talk.nabble.com>

Charles R Harris wrote:
>
> I don't know the details of your larger design, so perhaps my concerns
> are irrelevant. The virtue of PyArray_SimpleNewFromData is that the
> array can be deallocated without affecting the buffer memory.
>
> PyArray_SimpleNewFromData (PyObject*) (int nd, npy_intp* dims, int
> typenum, void* data)
>
> Sometimes, you want to wrap memory allocated elsewhere into an ndarray
> object for downstream use. This routine makes it straightforward to do
> that. The first three arguments are the same as in PyArray_SimpleNew,
> the final argument is a pointer to a block of contiguous memory that
> the ndarray should use as its data-buffer, which will be interpreted in
> C-style contiguous fashion. A new reference to an ndarray is returned,
> but the ndarray will not own its data. When this ndarray is
> deallocated, the pointer will not be freed. You should ensure that the
> provided memory is not freed while the returned array is in existence.
> The easiest way to handle this is if data comes from another
> reference-counted Python object. The reference count on this object
> should be increased after the pointer is passed in, and the base member
> of the returned ndarray should point to the Python object that owns the
> data. Then, when the ndarray is deallocated, the base-member will be
> DECREF'd appropriately. If you want the memory to be freed as soon as
> the ndarray is deallocated, then simply set the OWNDATA flag on the
> returned ndarray.
>
> Chuck

Dear Charles,
the point is that I'm using the PyArray just as a "holder" of a pointer
(the char* data). I allocate this only at the beginning and change its
value all the time. If I understood it correctly, using
SimpleNewFromData I'm just setting the value of "data", which I have to
change anyway every time I'm getting a new frame. The point is that I
want to avoid allocating at run time.

Cheers

Lorenzo
-- 
View this message in context: http://old.nabble.com/numpy-and-C-tp28767579p28854160.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From R.Springuel at umit.maine.edu  Fri Jun 11 14:12:27 2010
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Fri, 11 Jun 2010 14:12:27 -0400
Subject: [SciPy-User] Picking a random element of an array with conditions
Message-ID: <4C127C8B.1070804@umit.maine.edu>

I'd like to pick a random element of an array from those elements which
meet a certain condition, i.e. pick an element of a for which a == value
is True.

Without the condition, I'd phrase the command like this:
a[random.randint(len(a))]

Is there some similar thing that I can do to pick with the condition in
an efficient manner?
So far all I've come up with involves looping over the array to construct
an array of indices so that I can write:
a[indices[random.randint(len(indices))]]

-- 
R. Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By Appointment Only

From kwgoodman at gmail.com  Fri Jun 11 14:16:26 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 11 Jun 2010 11:16:26 -0700
Subject: [SciPy-User] Picking a random element of an array with conditions
In-Reply-To: <4C127C8B.1070804@umit.maine.edu>
References: <4C127C8B.1070804@umit.maine.edu>
Message-ID: 

On Fri, Jun 11, 2010 at 11:12 AM, R. Padraic Springuel wrote:
> I'd like to pick a random element of an array from those elements
> which meet a certain condition, i.e. pick an element of a for which a
> == value is True.
>
> Without the condition, I'd phrase the command like this:
> a[random.randint(len(a))]
>
> Is there some similar thing that I can do to pick with the condition
> in an efficient manner? So far all I've come up with involves looping
> over the array to construct an array of indices so that I can write:
> a[indices[random.randint(len(indices))]]

How about:

>> a = np.random.rand(10)
>> idx = a > 0.5
>> a[idx[np.random.randint(10)]]
   0.58803647603961251

From kwgoodman at gmail.com  Fri Jun 11 14:21:02 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 11 Jun 2010 11:21:02 -0700
Subject: [SciPy-User] Picking a random element of an array with conditions
In-Reply-To: 
References: <4C127C8B.1070804@umit.maine.edu>
Message-ID: 

On Fri, Jun 11, 2010 at 11:16 AM, Keith Goodman wrote:
> [...]
>
> How about:
>
>>> a = np.random.rand(10)
>>> idx = a > 0.5
>>> a[idx[np.random.randint(10)]]
>   0.58803647603961251

Oh, sorry, that doesn't work since idx is bool. How about:

>> a = np.random.rand(10)
>> idx = np.where(a > 0.5)[0]
>> a[idx[np.random.randint(idx.size)]]
   0.94308730304099841

From kwgoodman at gmail.com  Fri Jun 11 14:29:04 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 11 Jun 2010 11:29:04 -0700
Subject: [SciPy-User] Picking a random element of an array with conditions
In-Reply-To: 
References: <4C127C8B.1070804@umit.maine.edu>
Message-ID: 

On Fri, Jun 11, 2010 at 11:21 AM, Keith Goodman wrote:
> [...]
>
> Oh, sorry, that doesn't work since idx is bool. How about:
>
>>> a = np.random.rand(10)
>>> idx = np.where(a > 0.5)[0]
>>> a[idx[np.random.randint(idx.size)]]
>   0.94308730304099841
And here's the nd case:

>> a = np.random.rand(10,10)
>> idx = np.where(a.flat > 0.5)[0]
>> a.flat[idx[np.random.randint(idx.size)]]
   0.6073571170281532

From aarchiba at physics.mcgill.ca  Fri Jun 11 14:31:44 2010
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Fri, 11 Jun 2010 14:31:44 -0400
Subject: [SciPy-User] Picking a random element of an array with conditions
In-Reply-To: <4C127C8B.1070804@umit.maine.edu>
References: <4C127C8B.1070804@umit.maine.edu>
Message-ID: 

On 11 June 2010 14:12, R. Padraic Springuel wrote:
> [...]

If all you need is an element, then the easiest thing to do is pull out
those matching the condition:

b = a[a>3]
c = b[random.randint(len(b))]

(or a minor variation on this if a is not one-dimensional)

If you need the location in the original array of a random element
matching the condition, the easiest thing to do is build an index array:

ix = np.nonzero(a>3)

Now ix is a tuple of m index arrays if a is m-dimensional.

i = random.randint(len(ix[0]))
e = tuple(ia[i] for ia in ix)

Now e is a tuple pointing to a random element meeting the condition.

Anne

> --
> R. Padraic Springuel
> [...]
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From kwgoodman at gmail.com  Fri Jun 11 14:32:20 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 11 Jun 2010 11:32:20 -0700
Subject: [SciPy-User] Picking a random element of an array with conditions
In-Reply-To: 
References: <4C127C8B.1070804@umit.maine.edu>
Message-ID: 

On Fri, Jun 11, 2010 at 11:29 AM, Keith Goodman wrote:
> [...]
>
> And here's the nd case:
>
>>> a = np.random.rand(10,10)
>>> idx = np.where(a.flat > 0.5)[0]
>>> a.flat[idx[np.random.randint(idx.size)]]
>   0.6073571170281532
Oh, I guess np.where is slow. So I'd just do it the easy and faster way
(second method below):

>> a = np.random.rand(10000)
>> timeit idx = np.where(a > 0.5)[0]; a[idx[np.random.randint(idx.size)]]
10000 loops, best of 3: 168 us per loop
>> timeit b = a[a > 0.5]; b[np.random.randint(b.size)]
10000 loops, best of 3: 119 us per loop

Sorry for all the mail.

From ndrukelly at gmail.com  Fri Jun 11 17:25:16 2010
From: ndrukelly at gmail.com (Andrew Kelly)
Date: Fri, 11 Jun 2010 14:25:16 -0700
Subject: [SciPy-User] Import Error - numpy\linalg\lapack_lite.pyo
Message-ID: 

I seem to have stumbled upon a strange issue: I recently compiled my
program with py2exe. It works on several of my windows machines but I am
getting an odd error on one of the Windows Server 2003 machines
(strangely, it works on some but not all):

*File "numpy\linalg\lapack_lite.pyo", line10, in __load*
*ImportError: DLL load failed: The specified module could not be found*

If I take a look at the offending line, I think it may just be looking in
the wrong place (perhaps because of py2exe?):

*mod = imp.load_dynamic(__name__, path)*

where path is either:

*os.path.dirname(__loader__.archive)+'numpy.linalg.lapack_lite.pyd'*
or
*sys.prefix+'numpy.linalg.lapack_lite.pyd'*

Is py2exe breaking these paths?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david_baddeley at yahoo.com.au  Fri Jun 11 21:11:47 2010
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Fri, 11 Jun 2010 18:11:47 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] numpy and C
In-Reply-To: <28854111.post@talk.nabble.com>
References: <28767579.post@talk.nabble.com> <28829120.post@talk.nabble.com> <28831237.post@talk.nabble.com> <219460.24132.qm@web33001.mail.mud.yahoo.com> <28854111.post@talk.nabble.com>
Message-ID: <562822.44517.qm@web33007.mail.mud.yahoo.com>

Hi Lorenzo,

In C you have to explicitly free up any memory you allocate. In vanilla C
you do this with the malloc & free commands. If you allocate memory and
don't free it you get a memory leak, thus you've got to make sure you
have a free for every malloc. free needs the original pointer in order to
free up the correct data. In python (and to some extent when using the
python c-api to allocate python objects), this is taken care of by a
reference counting scheme, whereby python keeps a list of objects and the
number of references each has, and then calls free for you when the
reference count drops to zero.

With your code, the pre-existing buffer will presumably have been
malloced in your initialisation code, and will presumably get freed in
some cleanup code. The array you allocated with PyArray_SimpleNew,
however, will be managed by Python's reference counting and will be
automatically freed when its reference count goes to zero (i.e. when it
goes out of scope in python, or you decref it).
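The same ownership distinction is visible from pure Python; a small
sketch, with a bytearray standing in for the preallocated camera buffer
(illustration only, not code from this thread):

import numpy as np

buf = bytearray(16)                      # stand-in for the camera buffer
a = np.frombuffer(buf, dtype=np.uint8)   # wraps the memory, no copy
b = np.empty(16, dtype=np.uint8)         # allocates and owns its memory

print(a.flags['OWNDATA'])   # False -- like PyArray_SimpleNewFromData
print(b.flags['OWNDATA'])   # True  -- like PyArray_SimpleNew
# 'a' must not outlive 'buf': freeing the buffer while 'a' exists is
# exactly the dangling-pointer risk described in this thread.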
Your approach has two consequences: when reassigning the pointer, python
will no longer know which data it's supposed to free. When the array goes
out of scope and is garbage collected, python will try and free the data
the pointer is currently pointing to (i.e. your buffer, which should be
being handled by your cleanup code). As there is no longer a pointer to
the data allocated by the PyArray_SimpleNew call, this data cannot be
deallocated and is thus leaked.

Now to the other points: copying with memcpy is certainly slower than
passing pointers round, but my guess is that it won't be slow enough to
severely impact performance on modern hardware (I've got code with a
memcpy in it which reads a camera out at ~70 Hz, spools to disk, and
displays at ~10 Hz - the memcpy is by no means the bottleneck).

If you want to just use a pointer to the data in the buffer, though,
PyArray_SimpleNewFromData is definitely your function. All it does is
fashion an array descriptor (small, fast) around pre-existing data,
without doing any extra memory allocation or copying. Notably, it also
flags the underlying data in such a way that python's garbage collection
will not try and free it. You could have your initialisation and cleanup
functions, which allocate and clean up the buffer, and then a get-frame
function which executes your camera's get_frame command & then creates a
PyArray_SimpleNewFromData array using the pointer this returns.

I'm not really an expert at detecting memory leaks - the easiest (and
probably least reliable) way is just to watch your program's memory usage
- if it keeps going up you're in trouble. If you only allocate the
PyArray once, and then keep messing with the pointer, your approach is
more likely to generate segfaults & other nastiness though.

cheers,
David

----- Original Message ----
From: tinauser
To: scipy-user at scipy.org
Sent: Fri, 11 June, 2010 11:44:48 PM
Subject: Re: [SciPy-User] [SciPy-user] numpy and C

[...]
>>> Numpy uses char* all over the place and later casts to the needed
>>> type, it's the old way of doing void*. So your explicit cast is fine.
>>> For some compilers, gcc for example, you also need to use a compiler
>>> flag to let the compiler know that you are going to do such things.
>>> In gcc the flag is -fno-strict-aliasing but I don't think you need to
>>> worry about this in VC.
>>
>> That said, managing the data in this way can be problematic as you
>> need to track alignment and worry about freeing of memory. You might
>> want to look at PyArray_SimpleNewFromData.
>>
>> Chuck

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From josephsmidt at gmail.com  Sat Jun 12 12:55:59 2010
From: josephsmidt at gmail.com (Joseph Smidt)
Date: Sat, 12 Jun 2010 09:55:59 -0700
Subject: [SciPy-User] How To Use Loadtxt For Floats And Strings?
Message-ID: 

Hello,

I have a file with the following data:

0.227045E-01 0.610229E-03 \Omega_b h^2
0.110213E+00 0.550143E-02 \Omega_{DM} h^2
0.103980E+01 0.263806E-02 \theta
0.893775E-01 0.147515E-01 \tau

I would like to load all three columns into three arrays, the first two
being floats and the third a string. Can't I use loadtxt for this? I
tried using this command:

a, b, c = loadtxt('myfile.txt', unpack=True, dtype=(float,float,'S16'))

I must be doing something wrong. What is the proper expression for
loadtxt? Or must I use something else? Thanks!

-- 
------------------------------------------------------------------------
Joseph Smidt

Physics and Astronomy
4129 Frederick Reines Hall
Irvine, CA 92697-4575
Office: 949-824-3269

From pgmdevlist at gmail.com  Sat Jun 12 13:31:40 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 12 Jun 2010 13:31:40 -0400
Subject: [SciPy-User] How To Use Loadtxt For Floats And Strings?
In-Reply-To: 
References: 
Message-ID: 

On Jun 12, 2010, at 12:55 PM, Joseph Smidt wrote:
> a, b, c = loadtxt('myfile.txt', unpack=True, dtype=(float,float,'S16'))
>
> I must be doing something wrong. What is the proper expression for
> loadtxt? Or must I use something else? Thanks!

The right side outputs one structured array, with field names 'f0', 'f1'
and 'f2' by default. You can access individual columns by

>>> tmp = loadtxt('myfile.txt', unpack=True, dtype=(float,float,'S16'))
>>> (a,b,c) = [tmp["f%i" % i] for i in (0,1,2)]

or something like that
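One wrinkle with this particular file, raised again in the reply that
follows: the label field itself contains spaces ("\Omega_b h^2"), and
loadtxt splits on any whitespace. A possible workaround (a sketch,
assuming the file looks exactly like the sample above, with no blank
lines): read the numeric columns with loadtxt and pull the labels out by
hand:

import numpy as np

# Floats via loadtxt, restricted to the first two columns.
a, b = np.loadtxt('myfile.txt', usecols=(0, 1), unpack=True)

# Labels by hand: split on whitespace at most twice, keep the remainder.
c = [line.split(None, 2)[2].strip() for line in open('myfile.txt')]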
From vincent at vincentdavis.net  Sat Jun 12 13:57:33 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Sat, 12 Jun 2010 11:57:33 -0600
Subject: [SciPy-User] How To Use Loadtxt For Floats And Strings?
In-Reply-To: 
References: 
Message-ID: 

On Sat, Jun 12, 2010 at 11:31 AM, Pierre GM wrote:
> The right side outputs one structured array, with field names 'f0',
> 'f1' and 'f2' by default.
> [...]

You can run what is below to see better what is happening, but one
problem is that loadtxt uses any whitespace as a delimiter. This is a
problem for "\Omega_b h^2", as "h^2" will end up in another column or not
be imported at all, which may be what you want. Are the columns separated
by tabs? If so, that helps: you should specify the delimiter. Also, I
think you need to specify the data type as I have below, because you are
getting a structured array from this.

from StringIO import StringIO
import numpy as np

d = StringIO("""0.227045E-01 0.610229E-03 \Omega_b h^2
0.110213E+00 0.550143E-02 \Omega_{DM} h^2
0.103980E+01 0.263806E-02 \theta
0.893775E-01 0.147515E-01 \tau""")

print np.loadtxt(d, unpack=True, dtype=[('num1', float),('num2', float),('s1', '|S16')])
#a, b, c = np.loadtxt(d, unpack=True, dtype=[('num1', float),('num2', float),('s1', '|S16')])

Vincent

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From algebraicamente at gmail.com  Sat Jun 12 14:36:55 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Sat, 12 Jun 2010 13:36:55 -0500
Subject: [SciPy-User] multidimensional polynomial fit
Message-ID: <4C13D3C7.4090103@gmail.com>

Hello!

Is there some way to get a polynomial fit to a set of n-tuples? I've got
a set of 4-tuples: (x1,x2,x3,T), and I would like to get a polynomial
T(x1,x2,x3).

I've seen numpy.polyfit, but that doesn't work for multidimensional sets.

If there is no method available, I would be willing to write the
necessary code, just tell me how to get it included.

thanks!

Oscar

From josef.pktd at gmail.com  Sat Jun 12 15:43:08 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 12 Jun 2010 15:43:08 -0400
Subject: [SciPy-User] multidimensional polynomial fit
In-Reply-To: <4C13D3C7.4090103@gmail.com>
References: <4C13D3C7.4090103@gmail.com>
Message-ID: 

On Sat, Jun 12, 2010 at 2:36 PM, Oscar Gerardo Lazo Arjona wrote:
> [...]

Assuming I understand correctly - fitting the last variable to a
polynomial of the first three - it depends on how many cross terms you
want.
here is an example which restricts the powers in the cross-terms

>>> x = np.arange(5)[:,None] + [0,10,100]
>>> x = x[:,::-1]  # reverse for ndindex
>>> x
array([[100,  10,   0],
       [101,  11,   1],
       [102,  12,   2],
       [103,  13,   3],
       [104,  14,   4]])
>>> simplex = [ind for ind in np.ndindex(*[3]*x.shape[1]) if sum(ind) <= 2]
>>> simplex
[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 0), (0, 1, 1), (0, 2, 0), (1, 0, 0), (1, 0, 1), (1, 1, 0), (2, 0, 0)]
>>> np.array([np.prod(x**ind, 1) for ind in simplex]).T
array([[    1,     0,     0,    10,     0,   100,   100,     0,  1000, 10000],
       [    1,     1,     1,    11,    11,   121,   101,   101,  1111, 10201],
       [    1,     2,     4,    12,    24,   144,   102,   204,  1224, 10404],
       [    1,     3,     9,    13,    39,   169,   103,   309,  1339, 10609],
       [    1,     4,    16,    14,    56,   196,   104,   416,  1456, 10816]])

>>> nobs = 100
>>> x0 = np.random.randn(nobs, 3)
>>> x = np.array([np.prod(x0**ind, 1) for ind in simplex]).T
>>> y = x.sum(1) + 0.1*np.random.randn(nobs)
>>> y.shape
(100,)
>>> from scikits.statsmodels import OLS
>>> res = OLS(y, x).fit()
>>> res.params
array([ 1.02381284,  1.00619277,  0.99437357,  0.96839791,  1.00923175,
        1.00342817,  0.99046168,  1.00125689,  0.99069758,  0.98808115])
>>> yest = res.model.predict(x)
>>> import matplotlib.pyplot as plt
>>> plt.plot(y, yest)

use of OLS can be replaced by np.linalg.lstsq

Josef

> thanks!
>
> Oscar
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com  Sat Jun 12 15:44:30 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 12 Jun 2010 15:44:30 -0400
Subject: [SciPy-User] multidimensional polynomial fit
In-Reply-To: 
References: <4C13D3C7.4090103@gmail.com>
Message-ID: 

On Sat, Jun 12, 2010 at 3:43 PM, wrote:
> [...]
>>>> import matplotlib.pyplot as plt
>>>> plt.plot(y, yest)
correction for scatter plot:
plt.plot(y, yest, 'o')

> use of OLS can be replaced by np.linalg.lstsq
>
> Josef

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

From algebraicamente at gmail.com  Sun Jun 13 01:28:28 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Sun, 13 Jun 2010 05:28:28 +0000 (UTC)
Subject: [SciPy-User] multidimensional polynomial fit
References: <4C13D3C7.4090103@gmail.com>
Message-ID: 

josef.pktd at gmail.com writes:

> Assuming I understand correctly - fitting the last variable to a
> polynomial of the first three - it depends on how many cross terms you
> want.

Well, I already wrote a generalized function that works like polyfit:

def polynomial_fit(points, degree, depreciation=False):
    ...
    ...

It returns an array of the coefficients of the multidimensional
polynomial, with the degrees indicated as a list of integers (just as
polyfit).

It was quite a lot of work; it's probably the most abstract thing I've
done. I had lots of fun writing it, and I would like it to be included in
numpy (if you think that is wise).

> here is an example which restricts the powers in the cross-terms
>
> >>> x = np.arange(5)[:,None] + [0,10,100]
> [...]
> use of OLS can be replaced by np.linalg.lstsq

Well, thank you, but that's a lot more complicated to remember ;)

thanks.

Oscar
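Josef's recipe can also be wrapped in a single plain-numpy helper, using
np.linalg.lstsq as he suggests. A sketch: the function name polyfit_nd
and the toy data are invented here for illustration, and this is not
Oscar's polynomial_fit:

import numpy as np
from itertools import product

def polyfit_nd(X, y, degree):
    # Monomial exponents with total degree <= degree (Josef's "simplex").
    nvars = X.shape[1]
    powers = [p for p in product(range(degree + 1), repeat=nvars)
              if sum(p) <= degree]
    # Design matrix: one column per monomial x1**i * x2**j * x3**k ...
    A = np.column_stack([np.prod(X**np.array(p), axis=1) for p in powers])
    coefs = np.linalg.lstsq(A, y)[0]
    return powers, coefs

# Toy check: recover T(x1,x2,x3) = 1 + 2*x1 - x2*x3 from noisy 4-tuples.
X = np.random.randn(200, 3)
y = 1 + 2*X[:, 0] - X[:, 1]*X[:, 2] + 0.01*np.random.randn(200)
powers, coefs = polyfit_nd(X, y, 2)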
From d.l.goldsmith at gmail.com  Sun Jun 13 15:42:24 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Sun, 13 Jun 2010 12:42:24 -0700
Subject: [SciPy-User] multidimensional polynomial fit
In-Reply-To: 
References: <4C13D3C7.4090103@gmail.com>
Message-ID: 

On Sat, Jun 12, 2010 at 10:28 PM, Oscar Gerardo Lazo Arjona <
algebraicamente at gmail.com> wrote:

> Well, I already wrote a generalized function that works like polyfit:
>
> def polynomial_fit(points, degree, depreciation=False):
> [...]
>
> It was quite a lot of work; it's probably the most abstract thing I've
> done. I had lots of fun writing it, and I would like it to be included
> in numpy (if you think that is wise).

Whatever you do, before submission, please ensure that your function has
a complete, Standard-conforming docstring (and, though this is not "my
department," so to speak, most would probably join me in also asking that
it be accompanied by a pretty complete suite of unit tests, esp.
something as mathematically complicated as this - my off-the-cuff would
be that it should have at least three, preferably at least five,
non-trivial or semi-trivial tests, by which I mean tests which pass when
a non-trivial solution consisting of simple, e.g., integer, values is
produced exactly [to within, say, 9 sigfigs], for at least two degrees
above 2, one even and one odd - just my $0.02).

DG
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From algebraicamente at gmail.com  Sun Jun 13 17:29:51 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Sun, 13 Jun 2010 21:29:51 +0000 (UTC)
Subject: [SciPy-User] multidimensional polynomial fit
References: <4C13D3C7.4090103@gmail.com>
Message-ID: 

David Goldsmith gmail.com> writes:

> Whatever you do, before submission, please ensure that your function
> has a complete, Standard-conforming docstring [...] - just my $0.02. DG

I implemented the function so that it returns a sage[1] symbolic
expression, although it could also return a list of coefficients. Within
sage's command prompt, it can be used like this:

sage: var('x y z')
(x, y, z)
sage: G = -2*y^3*x + x^2 + x*y - 5*z - 1
sage: pg = [(i,j,k,G.subs(x=i,y=j,z=k)) for i in srange(-1,1.5,0.5) for j in srange(-1,1.5,0.5) for k in srange(-1,1.5,0.5)]
sage: load('/myhome/polynomial_fit.py')
sage: Pg = polynomial_fit(pg, [2,3,1], [x,y,z], depreciation=True)
-2.0*x*y^3 + x^2 + x*y - 5.0*z - 1.0

It reconstructs the original polynomial without *any error*. Notice the
depreciation option, which drops coefficients that are smaller than
10e-10 so that they do not appear in the resulting polynomial.

So, if I understood you correctly, this passes your proposed requirement
;). I haven't written a docstring, but in the meantime, how do I make my
submission?

I also did not understand what you meant by unit tests...
Oscar From d.l.goldsmith at gmail.com Sun Jun 13 18:15:15 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 13 Jun 2010 15:15:15 -0700 Subject: [SciPy-User] multidimensional polynomial fit In-Reply-To: References: <4C13D3C7.4090103@gmail.com> Message-ID: On Sun, Jun 13, 2010 at 2:29 PM, Oscar Gerardo Lazo Arjona < algebraicamente at gmail.com> wrote: > David Goldsmith gmail.com> writes: > > > Whatever you do, before submission, please ensure that your function has > a > complete, Standard-conforming docstring (and, though this is not "my > department," so to speak, most would probably join me in also asking that > it be > accompanied by a pretty complete suite of unit tests, esp. something as > mathematically complicated as this - my off-the-cuff would be that it > should > have at least three, preferably at least five non-trivial or semi-trivial > tests, > by which I mean tests which pass when a non-trivial solution consisting of > simple, e.g., integer, values is produced exactly [to within, say, 9 > sigfigs], > for at least two degrees above 2, one even and one odd - just my $0.02).DG > > > I implemented the function so that it returns a sage[1] symbolic > expression. > Although it could also return a list of coefficients. Within sage's command > propt, it can be used like this: > > sage: var('x y z') > (x, y, z) > sage: G=-2*y^3*x +x^2 + x*y -5*z -1 > sage: pg=[(i,j,k,G.subs(x=i,y=j,z=k)) for i in srange(-1,1.5,0.5) for j in > srange(-1,1.5,0.5) for k in srange(-1,1.5,0.5)] > sage: load('/myhome/polynomial_fit.py') > sage: Pg=polynomial_fit(pg,[2,3,1],[x,y,z],depreciation=True) > -2.0*x*y^3 + x^2 + x*y - 5.0*z - 1.0 > > It reconstructs the original polynomial without *any error*. So the number of data _must_ equal the number of terms in the polynomial? (Anything else will be over- or under-determined and require either some sort of error minimizing fit.) What's your use case; just curious. DG > Notice the > depreciation option which depreciates coefficients that are smaller than > 10e-10 > so that they do not appear in the resulting polynomial. > > So, if I understood you correctly, this passes your propposed requirement > ;) .I > haven't written a docstring, but in the mean time, how do I make my > submission? > > I also did not understand what you meant by unit tests... > > Oscar > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Jun 13 18:18:27 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 13 Jun 2010 15:18:27 -0700 Subject: [SciPy-User] multidimensional polynomial fit In-Reply-To: References: <4C13D3C7.4090103@gmail.com> Message-ID: On Sun, Jun 13, 2010 at 3:15 PM, David Goldsmith wrote: > On Sun, Jun 13, 2010 at 2:29 PM, Oscar Gerardo Lazo Arjona < > algebraicamente at gmail.com> wrote: > >> David Goldsmith gmail.com> writes: >> >> > Whatever you do, before submission, please ensure that your function has >> a >> complete, Standard-conforming docstring (and, though this is not "my >> department," so to speak, most would probably join me in also asking that >> it be >> accompanied by a pretty complete suite of unit tests, esp. 
something as
>> mathematically complicated as this - my off-the-cuff would be that it
>> should
>> have at least three, preferably at least five non-trivial or semi-trivial
>> tests,
>> by which I mean tests which pass when a non-trivial solution consisting of
>> simple, e.g., integer, values is produced exactly [to within, say, 9
>> sigfigs],
>> for at least two degrees above 2, one even and one odd - just my $0.02).DG
>>
>>
>> I implemented the function so that it returns a sage[1] symbolic
>> expression.
>>
> In general I don't think we "support" sage objects, so whatever you submit
will have to suppress that output in favor, exclusively, of the list (numpy
array preferred, I think) of coefficients (if I'm wrong about this, someone
will correct me).

DG

> Although it could also return a list of coefficients. Within sage's command
>> propt, it can be used like this:
>>
>> sage: var('x y z')
>> (x, y, z)
>> sage: G=-2*y^3*x +x^2 + x*y -5*z -1
>> sage: pg=[(i,j,k,G.subs(x=i,y=j,z=k)) for i in srange(-1,1.5,0.5) for j in
>> srange(-1,1.5,0.5) for k in srange(-1,1.5,0.5)]
>> sage: load('/myhome/polynomial_fit.py')
>> sage: Pg=polynomial_fit(pg,[2,3,1],[x,y,z],depreciation=True)
>> -2.0*x*y^3 + x^2 + x*y - 5.0*z - 1.0
>>
>> It reconstructs the original polynomial without *any error*.
>
> So the number of data _must_ equal the number of terms in the polynomial?
> (Anything else will be over- or under-determined and require either some
> sort of error minimizing fit.) What's your use case; just curious.
>
> DG
>
>> Notice the
>> depreciation option which depreciates coefficients that are smaller than
>> 10e-10
>> so that they do not appear in the resulting polynomial.
>>
>> So, if I understood you correctly, this passes your propposed requirement
>> ;) .I
>> haven't written a docstring, but in the mean time, how do I make my
>> submission?
>>
>> I also did not understand what you meant by unit tests...
>>
>> Oscar
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>

--
Mathematician: noun, someone who disavows certainty when their uncertainty set is
non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies,
prevents mankind from committing a general suicide. (As interpreted by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From algebraicamente at gmail.com  Sun Jun 13 19:01:56 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Sun, 13 Jun 2010 23:01:56 +0000 (UTC)
Subject: [SciPy-User] multidimensional polynomial fit
References: <4C13D3C7.4090103@gmail.com>
Message-ID:

David Goldsmith gmail.com> writes:

> So the number of data _must_ equal the number of terms in the polynomial?
(Anything else will be over- or under-determined and require either some sort
of error minimizing fit.) What's your use case; just curious. DG

No, in general, the number of data points will be different than the number of
terms in the polynomial. For lower-dimensional fits (in particular for
dimension 1) the number of points is expected to be larger than the number of
terms, but for higher-dimensional polynomials, the number of terms grows very
fast, and will probably be larger than the number of points.

At this point it works very slowly for dimensions greater than 2.
Fitting a set of 125 4-tuples with a polynomial of degree 3 takes approximately
12 seconds on my intel-core2duo laptop. Perhaps it would be wise to implement
it in some compiled language. I am learning Fortran right now, perhaps that
would make it faster...

Yes, I will take sage objects out of an eventual contribution to numpy.

Oscar

From d.l.goldsmith at gmail.com  Sun Jun 13 21:10:52 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Sun, 13 Jun 2010 18:10:52 -0700
Subject: [SciPy-User] multidimensional polynomial fit
In-Reply-To:
References: <4C13D3C7.4090103@gmail.com>
Message-ID:

On Sun, Jun 13, 2010 at 4:01 PM, Oscar Gerardo Lazo Arjona <
algebraicamente at gmail.com> wrote:

> David Goldsmith  gmail.com> writes:
>
> > So the number of data _must_ equal the number of terms in the
> polynomial?
> (Anything else will be over- or under-determined and require either some
> sort of
> error minimizing fit.) What's your use case; just curious.DG
>
> No, in general, the number of data points will be different than the number
> of
> terms in the polynomial.

In that case, in what sense are your answers "exact"?

DG

> For lower-dimmensional fits (in particular for
> dimension 1) the number of points is expected to be larger than the number
> of
> terms, but for higher dimmensional polynomials, the number of terms grows
> very
> fast, and will probably be larger than the number of points.
>
> At this point it works very slowly for dimmensions greater than 2.
> Adjusting a
> set of 125 4-tuples to a polynomial of degree 3 takes approximately 12
> seconds
> on my intel-core2duo laptop. Perhaps it would be wise to implement it in
> some
> compiled language. I am learning fotran right now, perhaps that would make
> it...
>
> Yes, I will take sage objects out of an eventual contribution to numpy.
>
> Oscar
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Mathematician: noun, someone who disavows certainty when their uncertainty set is
non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies,
prevents mankind from committing a general suicide. (As interpreted by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Mon Jun 14 10:50:49 2010
From: sturla at molden.no (Sturla Molden)
Date: Mon, 14 Jun 2010 16:50:49 +0200
Subject: [SciPy-User] Building extensions on Win64 with GNU compilers
Message-ID: <4C1641C9.90805@molden.no>

I need libmsvcr90.a and libpython26.a for building extensions on Win64 with
GNU compilers (gcc, g++, gfortran). Import libraries are only available for
Win32.

Looking at:

http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport

This page contains a script with an invalid regex:

TABLE = re.compile(r'^\s+\[([\s*\d*)\] (\w*)')

What to do?

Does anyone have a fix for this script or the def-files for producing the
import libraries?

Unfortunately I am not very good at solving build problems. :(

Sturla Molden

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From roblourens at gmail.com  Mon Jun 14 12:15:52 2010
From: roblourens at gmail.com (Rob Lourens)
Date: Mon, 14 Jun 2010 11:15:52 -0500
Subject: [SciPy-User] Building extensions on Win64 with GNU compilers
In-Reply-To: <4C1641C9.90805@molden.no>
References: <4C1641C9.90805@molden.no>
Message-ID:

My guess is that they're missing a bracket, and meant this:

TABLE = re.compile(r'^\s+\[([\s*\d*])\] (\w*)')

Rob

On Mon, Jun 14, 2010 at 9:50 AM, Sturla Molden wrote:

>
> I need libmsvcr90.a and libpython26.a for building extensions on Win64 with
> GNU compilers (gcc, g++, gfortran). Import libraries are only available for
> Win32.
>
> Looking at:
>
> http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport
>
> This page contain a script with an invalid regex:
>
> TABLE = re.compile(r'^\s+\[([\s*\d*)\] (\w*)')
>
> What to do?
>
> Does anyone have a fix for this script or the def-files for producing the
> import libraries?
>
> Unfortunately I am not very good at solving build problems. :(
>
>
> Sturla Molden
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From timothyjkinney at gmail.com  Mon Jun 14 16:28:14 2010
From: timothyjkinney at gmail.com (Timothy Kinney)
Date: Mon, 14 Jun 2010 15:28:14 -0500
Subject: [SciPy-User] Leastsq questions
Message-ID:

Scipy Community,

I am using the scipy leastsq method to fit some cooling data, such that the
temperature is defined by an exponential decay function (Newton's Cooling
Law). However, there are some other factors which also influence the cooling
rate and I am attempting to account for them in the cooling law. I have some
questions about leastsq:

1) When I fit the data in Excel I get a different fit than when I fit the
same data in Scipy. Why is this? The fits are not very different, but they
are consistently different.

2) How do I calculate the goodness of fit (R squared) for the leastsq
algorithm? I think it's just the sum of the squared errors divided by
something, but shouldn't this be easily called from the output?

I would like to iterate over a computation where I change one of the values
and see how it affects the goodness of the fit. I'm not sure how to
calculate the r-squared from the plsq that is returned from leastsq.

My goal is to find the value of a single parameter that best optimizes the
leastsq fit. Should I be using one of the other optimizing functions for
this instead? Basically, I calculate a value from the theory and compare it
to the experimental data. I fit the theory to the data and look at the
r-squared. I want to adjust the theory to account for some other factors by
adjusting one of the terms in a way that maximizes the goodness of fit.

Thanks for your attention.

-Tim

From david_baddeley at yahoo.com.au  Mon Jun 14 16:52:07 2010
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Mon, 14 Jun 2010 13:52:07 -0700 (PDT)
Subject: [SciPy-User] Leastsq questions
In-Reply-To:
References:
Message-ID: <378289.9106.qm@web33006.mail.mud.yahoo.com>

Not too sure about Excel or R squared (is R squared appropriate for nonlinear
fits?), but can comment on your additional factors. Why don't you just make
them parameters? Leastsq will then do the optimisation for you. Note that the
parameter argument to leastsq can be an array.

For a goodness of fit it should be relatively easy to calculate chi-squared
or something similar.
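Something along these lines should do it - a rough sketch assuming unit
weights, where residuals is whatever your error function returns at the
fitted parameters and y is the measured data:

import numpy as np

def fit_stats(residuals, y):
    ss_res = np.sum(residuals ** 2)           # chi-squared with unit weights
    ss_tot = np.sum((y - np.mean(y)) ** 2)    # total sum of squares
    return ss_res, 1.0 - ss_res / ss_tot      # (chi2, R-squared analogue)

with the caveat above that R squared for a nonlinear model is only a rough
figure of merit.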
Cheers, David ----- Original Message ---- From: Timothy Kinney To: scipy-user at scipy.org Sent: Tue, 15 June, 2010 8:28:14 AM Subject: [SciPy-User] Leastsq questions Scipy Community, I am using the scipy leastsq method to fit some cooling data, such that the temperature is defined by an exponential decay function (Newton's Cooling Law). However, there are some other factors which also influence the cooling rate and I am attempting to account for them in the cooling law. I have some questions about leastsq: 1) When I fit the data in Excel I get a different fit than when I fit the same data in Scipy. Why is this? The fits are not very different, but they are consistently different. 2) How do I calculate the goodness of fit (R squared) for the leastsq algorithm? I think it's just the sum of the squared errors divided by something, but shouldn't this be easily called from the output? I would like to iterate over a computation where I change one of the values and see how it effects the goodness of the fit. I'm not sure how to calculate the r-squared from the plsq that is returned from leastsq. My goal is to find the value of a single parameter that best optimizes the leastsq fit. Should I be using one of the other optimizing functions for this instead? Basically, I calculate a value from the theory and compare it to the experimental data. I fit the theory to the data and look at the r-squared. I want to adjust the theory to account for some other factors by adjusting one of the terms in a way that maximizes the goodness of fit. Thanks for your attention. -Tim _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From charlesr.harris at gmail.com Mon Jun 14 21:49:07 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 14 Jun 2010 19:49:07 -0600 Subject: [SciPy-User] Leastsq questions In-Reply-To: References: Message-ID: On Mon, Jun 14, 2010 at 2:28 PM, Timothy Kinney wrote: > Scipy Community, > > I am using the scipy leastsq method to fit some cooling data, such > that the temperature is defined by an exponential decay function > (Newton's Cooling Law). However, there are some other factors which > also influence the cooling rate and I am attempting to account for > them in the cooling law. I have some questions about leastsq: > > 1) When I fit the data in Excel I get a different fit than when I fit > the same data in Scipy. Why is this? The fits are not very different, > but they are consistently different. > > It is impossible to know without a good deal more information, i.e., what is your model, how is it parameterized, what is the data, when do the iterations stop, and, if the parameters aren't sufficiently independent over the data set, what is the required condition number. I suspect the latter is coming into play here. > 2) How do I calculate the goodness of fit (R squared) for the leastsq > algorithm? I think it's just the sum of the squared errors divided by > something, but shouldn't this be easily called from the output? > > If you are using the function correctly, then you already have an error function that returns the residuals. Note that it is also available in the full return. I would like to iterate over a computation where I change one of the > values and see how it effects the goodness of the fit. I'm not sure > how to calculate the r-squared from the plsq that is returned from > leastsq. > > plsq? 
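If plsq is the tuple returned by leastsq: call leastsq with full_output=1 and
the info dictionary it returns has a 'fvec' entry holding the residuals at the
solution, so the sum of squares is one line away. Schematically (errfunc, p0,
t and y below stand in for your own error function, starting guess and data):

from scipy import optimize

p, cov, info, mesg, ier = optimize.leastsq(errfunc, p0, args=(t, y),
                                           full_output=1)
ss_res = (info['fvec'] ** 2).sum()  # sum of squared residuals at the solution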
> My goal is to find the value of a single parameter that best optimizes
> the leastsq fit. Should I be using one of the other optimizing
> functions for this instead? Basically, I calculate a value from the
> theory and compare it to the experimental data. I fit the theory to
> the data and look at the r-squared. I want to adjust the theory to
> account for some other factors by adjusting one of the terms in a way
> that maximizes the goodness of fit.
>

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From algebraicamente at gmail.com  Mon Jun 14 22:32:20 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Tue, 15 Jun 2010 02:32:20 +0000 (UTC)
Subject: [SciPy-User] multidimensional polynomial fit
References: <4C13D3C7.4090103@gmail.com>
Message-ID:

David Goldsmith gmail.com> writes:

> In that case, in what sense are your answers "exact"? DG

The points I used in the example were generated using a polynomial with
integer coefficients. The algorithm returns a polynomial with those exact
coefficients (not as integers though). Of course, that exactness only makes
sense if the data is generated with a polynomial in the first place ;)

Oscar.

From d.l.goldsmith at gmail.com  Tue Jun 15 00:45:51 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 14 Jun 2010 21:45:51 -0700
Subject: [SciPy-User] multidimensional polynomial fit
In-Reply-To:
References: <4C13D3C7.4090103@gmail.com>
Message-ID:

On Mon, Jun 14, 2010 at 7:32 PM, Oscar Gerardo Lazo Arjona <
algebraicamente at gmail.com> wrote:

> David Goldsmith  gmail.com> writes:
>
> > In that case, in what sense are your answers "exact"?DG
>
> The points I used in the example were generated using a polynomial with
> integer
> coefficients. The algorithm returs a polynomial with those exact
> coefficients
> (not as integers though). Of course, that exactness only makes sense if the
> data
> is generated with a polynomial in the first place ;)
>

Are you always going to be generating your own data in that fashion?

DG

>
> Oscar.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Mathematician: noun, someone who disavows certainty when their uncertainty set is
non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies,
prevents mankind from committing a general suicide. (As interpreted by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david at silveregg.co.jp  Tue Jun 15 03:38:59 2010
From: david at silveregg.co.jp (David)
Date: Tue, 15 Jun 2010 16:38:59 +0900
Subject: [SciPy-User] Building extensions on Win64 with GNU compilers
In-Reply-To: <4C1641C9.90805@molden.no>
References: <4C1641C9.90805@molden.no>
Message-ID: <4C172E13.7040609@silveregg.co.jp>

On 06/14/2010 11:50 PM, Sturla Molden wrote:
>
> I need libmsvcr90.a and libpython26.a for building extensions on Win64
> with GNU compilers (gcc, g++, gfortran). Import libraries are only
> available for Win32.
>
> Looking at:
>
> http://projects.scipy.org/numpy/wiki/MicrosoftToolchainSupport
>
> This page contain a script with an invalid regex:
>
> TABLE = re.compile(r'^\s+\[([\s*\d*)\] (\w*)')

The page is outdated, and there is a script in
tools/win32build/misc/msvcrt90 (yop.sh - I should change it to a real name)
to do it. You need cygwin as well to do it.
But I suspect you won't be able to do much: if you build numpy with visual studio and say Intel Fortran compiler, you won't be able to use gfortran with it. Last time I looked at it, there were numerous issues because of C runtime clashes between the MS runtime and libgfortran - I think the only solution would be to rewrite our own libgfortran and compile it with the MS compiler so that they all use the same C runtime. cheers, David From d.l.goldsmith at gmail.com Mon Jun 14 05:05:35 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 14 Jun 2010 02:05:35 -0700 Subject: [SciPy-User] SciPy docs marathon Message-ID: Hi, all! The scipy doc marathon has gotten off to a very slow start this summer. We are producing less than 1000 words a week, perhaps because many universities are still finishing up spring classes. So, this is a second appeal to everyone to pitch in and help get scipy documented so that it's easy to learn how to use it. Because some of the packages are quite specialized, we need both "regular" contributors to write lots of pages, and some people experienced in using each module (and the mathematics behind the software) to make sure we don't water it down or make it wrong in the process. If you can help, please, now is the time to step forward. Thanks! On behalf of Joe and myself, David Goldsmith Olympia, WA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Thu Jun 10 17:08:00 2010 From: ben.root at ou.edu (Benjamin Root) Date: Thu, 10 Jun 2010 16:08:00 -0500 Subject: [SciPy-User] re[SciPy-user] moving for loops... In-Reply-To: <28848191.post@talk.nabble.com> References: <28633477.post@talk.nabble.com> <28634924.post@talk.nabble.com> <28640602.post@talk.nabble.com> <28640656.post@talk.nabble.com> <28642434.post@talk.nabble.com> <28686356.post@talk.nabble.com> <28711249.post@talk.nabble.com> <28711444.post@talk.nabble.com> <28711581.post@talk.nabble.com> <28824023.post@talk.nabble.com> <28846602.post@talk.nabble.com> <28848191.post@talk.nabble.com> Message-ID: Good! The -1 in the reshape means "however many it takes" to have a correct reshaped array. Of course, as always with developing code that uses reshape operations, do some "sanity checks" to make sure that your array was reshaped in a manner you expect. When you have many-dimension arrays, it is very easy to make a mistake with reshape. Print out a few slices of the array and/or see if the averages make sense until you are convinced that you coded it correctly. Ben Root On Thu, Jun 10, 2010 at 3:36 PM, mdekauwe wrote: > > OK I think it is clear now!! Although what does the -1 bit do, this is > surely > the same as saying 11, 12 or numyears, nummonths? > > thanks. > > > > Benjamin Root-2 wrote: > > > > Well, let's try a more direct example. I am going to create a 4d array > of > > random values to illustrate. I know the length of the dimensions won't > be > > exactly the same as yours, but the example will still be valid. > > > > In this example, I will be able to calculate *all* of the monthly > averages > > for *all* of the variables for *all* of the grid points without a single > > loop. 
> > > >> jules = np.random.random((132, 10, 50, 3)) > >> print jules.shape > > (132, 10, 50, 3) > > > >> jules_5d = np.reshape(jules, (-1, 12) + jules.shape[1:]) > >> print jules_5d.shape > > (11, 12, 10, 50, 3) > > > >> jules_5d = np.ma.masked_array(jules_5d, mask=jules_5d < 0.0) > > > >> jules_means = np.mean(jules_5d, axis=0) > >> print jules_means.shape > > (12, 10, 50, 3) > > > > voila! This matrix has a mean for each month across all eleven years for > > each datapoint in each of the 10 variables at each (I am assuming) level > > in > > the atmosphere. > > > > So, if you want to operate on a subset of your jules matrix (for example, > > you need to do special masking for each variable), then you can just work > > off of a slice of the original matrix, and many of these same concepts in > > this example and the previous example still applies. > > > > Ben Root > > > > > > On Thu, Jun 10, 2010 at 1:08 PM, mdekauwe wrote: > > > >> > >> Hi, > >> > >> No if I am honest I am a little confused how what you are suggesting > >> would > >> work. As I see it the array I am trying to average from has dims > >> jules[(numyears * nummonths),1,numpts,0]. Where the first dimension > (132) > >> is > >> 12 months x 11 years. And as I said before I would like to average the > >> jan > >> from the first, second, third years etc. Then the same for the feb and > so > >> on. > >> > >> So I don't see how you get to your 2d array that you mention in the > first > >> line? I thought what you were suggesting was I could precompute the step > >> that builds the index for the months e.g > >> > >> mth_index = np.zeros(0) > >> for month in xrange(nummonths): > >> mth_index = np.append(mth_index, np.arange(month, numyears * > >> nummonths, > >> nummonths)) > >> > >> and use this as my index to skip the for loop. Though I still have a for > >> loop I guess! > >> > >> > >> > >> > >> > >> > >> Benjamin Root-2 wrote: > >> > > >> > Correction for me as well. To mask out the negative values, use > masked > >> > arrays. So we will turn jules_2d into a masked array (second line), > >> then > >> > all subsequent commands will still work as expected. It is very > >> similar > >> > to > >> > replacing negative values with nans and using nanmin(). > >> > > >> >> jules_2d = jules.reshape((-1, 12)) > >> >> jules_2d = np.ma.masked_array(jules_2d, mask=jules_2d < 0.0) > >> >> jules_monthly = np.mean(jules_2d, axis=0) > >> >> print jules_monthly.shape > >> > (12,) > >> > > >> > Ben Root > >> > > >> > On Tue, Jun 8, 2010 at 7:49 PM, Benjamin Root > wrote: > >> > > >> >> The np.mod in my example caused the data points to stay within [0, > 11] > >> in > >> >> order to illustrate that these are months. In my example, months are > >> >> column, years are rows. In your desired output, months are rows and > >> >> years > >> >> are columns. It makes very little difference which way you have it. > >> >> > >> >> Anyway, let's imagine that we have a time series of data "jules". We > >> can > >> >> easily reshape this like so: > >> >> > >> >> > jules_2d = jules.reshape((-1, 12)) > >> >> > jules_monthly = np.mean(jules_2d, axis=0) > >> >> > print jules_monthly.shape > >> >> (12,) > >> >> > >> >> voila! You have 12 values in jules_monthly which are means for that > >> >> month > >> >> across all years. > >> >> > >> >> protip - if you want yearly averages just change the ax parameter in > >> >> np.mean(): > >> >> > jules_yearly = np.mean(jules_2d, axis=1) > >> >> > >> >> I hope that makes my previous explanation clearer. 
> >> >> > >> >> Ben Root > >> >> > >> >> > >> >> On Tue, Jun 8, 2010 at 5:41 PM, mdekauwe wrote: > >> >> > >> >>> > >> >>> OK... > >> >>> > >> >>> but if I do... > >> >>> > >> >>> In [28]: np.mod(np.arange(nummonths*numyears), > >> nummonths).reshape((-1, > >> >>> nummonths)) > >> >>> Out[28]: > >> >>> array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > >> >>> > >> >>> When really I would be after something like this I think? > >> >>> > >> >>> array([ 0, 12, 24, 36, 48, 60, 72, 84, 96, 108, 120], > >> >>> [ 1, 13, 25, 37, 49, 61, 73, 85, 97, 109, 121], > >> >>> [ 2, 14, 26, 38, 50, 62, 74, 86, 98, 110, 122] > >> >>> etc, etc > >> >>> > >> >>> i.e. so for each month jump across the years. > >> >>> > >> >>> Not quite sure of this example...this is what I currently have which > >> >>> does > >> >>> seem to work, though I guess not completely efficiently. > >> >>> > >> >>> for month in xrange(nummonths): > >> >>> tmp = jules[xrange(0, numyears * nummonths, > >> nummonths),VAR,:,0] > >> >>> tmp[tmp < 0.0] = np.nan > >> >>> data[month,:] = np.mean(tmp, axis=0) > >> >>> > >> >>> > >> >>> > >> >>> > >> >>> Benjamin Root-2 wrote: > >> >>> > > >> >>> > If you want an average for each month from your timeseries, then > >> the > >> >>> > sneaky > >> >>> > way would be to reshape your array so that the time dimension is > >> split > >> >>> > into > >> >>> > two (month, year) dimensions. > >> >>> > > >> >>> > For a 1-D array, this would be: > >> >>> > > >> >>> >> dataarray = numpy.mod(numpy.arange(36), 12) > >> >>> >> print dataarray > >> >>> > array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, > >> 3, > >> >>> 4, > >> >>> > 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, > >> 8, > >> >>> 9, > >> >>> > 10, 11]) > >> >>> >> datamatrix = dataarray.reshape((-1, 12)) > >> >>> >> print datamatrix > >> >>> > array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], > >> >>> > [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]]) > >> >>> > > >> >>> > Hope that helps. > >> >>> > > >> >>> > Ben Root > >> >>> > > >> >>> > > >> >>> > On Fri, May 28, 2010 at 3:28 PM, mdekauwe > >> wrote: > >> >>> > > >> >>> >> > >> >>> >> OK so I just need to have a quick loop across the 12 months then, > >> >>> that > >> >>> is > >> >>> >> fine, just thought there might have been a sneaky way! > >> >>> >> > >> >>> >> Really appreciated, getting there slowly! > >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> josef.pktd wrote: > >> >>> >> > > >> >>> >> > On Fri, May 28, 2010 at 4:14 PM, mdekauwe > >> >>> wrote: > >> >>> >> >> > >> >>> >> >> ok - something like this then...but how would i get the index > >> for > >> >>> the > >> >>> >> >> month > >> >>> >> >> for the data array (where month is 0, 1, 2, 4 ... 11)? 
> >> >>> >> >> > >> >>> >> >> data[month,:] = array[xrange(0, numyears * nummonths, > >> >>> >> nummonths),VAR,:,0] > >> >>> >> > > >> >>> >> > you would still need to start at the right month > >> >>> >> > data[month,:] = array[xrange(month, numyears * nummonths, > >> >>> >> > nummonths),VAR,:,0] > >> >>> >> > or > >> >>> >> > data[month,:] = array[month: numyears * nummonths : > >> >>> nummonths),VAR,:,0] > >> >>> >> > > >> >>> >> > an alternative would be a reshape with an extra month dimension > >> and > >> >>> >> > then sum only once over the year axis. this might be faster but > >> >>> >> > trickier to get the correct reshape . > >> >>> >> > > >> >>> >> > Josef > >> >>> >> > > >> >>> >> >> > >> >>> >> >> and would that be quicker than making an array months... > >> >>> >> >> > >> >>> >> >> months = np.arange(numyears * nummonths) > >> >>> >> >> > >> >>> >> >> and you that instead like you suggested x[start:end:12,:]? > >> >>> >> >> > >> >>> >> >> Many thanks again... > >> >>> >> >> > >> >>> >> >> > >> >>> >> >> josef.pktd wrote: > >> >>> >> >>> > >> >>> >> >>> On Fri, May 28, 2010 at 3:53 PM, mdekauwe < > mdekauwe at gmail.com> > >> >>> wrote: > >> >>> >> >>>> > >> >>> >> >>>> Ok thanks...I'll take a look. > >> >>> >> >>>> > >> >>> >> >>>> Back to my loops issue. What if instead this time I wanted > to > >> >>> take > >> >>> >> an > >> >>> >> >>>> average so every march in 11 years, is there a quicker way > to > >> go > >> >>> >> about > >> >>> >> >>>> doing > >> >>> >> >>>> that than my current method? > >> >>> >> >>>> > >> >>> >> >>>> nummonths = 12 > >> >>> >> >>>> numyears = 11 > >> >>> >> >>>> > >> >>> >> >>>> for month in xrange(nummonths): > >> >>> >> >>>> for i in xrange(numpts): > >> >>> >> >>>> for ym in xrange(month, numyears * nummonths, > >> nummonths): > >> >>> >> >>>> data[month, i] += array[ym, VAR, > >> land_pts_index[i], > >> >>> 0] > >> >>> >> >>> > >> >>> >> >>> > >> >>> >> >>> x[start:end:12,:] gives you every 12th row of an array x > >> >>> >> >>> > >> >>> >> >>> something like this should work to get rid of the inner loop, > >> or > >> >>> you > >> >>> >> >>> could directly put > >> >>> >> >>> range(month, numyears * nummonths, nummonths) into the array > >> >>> instead > >> >>> >> >>> of ym and sum() > >> >>> >> >>> > >> >>> >> >>> Josef > >> >>> >> >>> > >> >>> >> >>> > >> >>> >> >>>> > >> >>> >> >>>> so for each point in the array for a given month i am > jumping > >> >>> >> through > >> >>> >> >>>> and > >> >>> >> >>>> getting the next years month and so on, summing it. > >> >>> >> >>>> > >> >>> >> >>>> Thanks... > >> >>> >> >>>> > >> >>> >> >>>> > >> >>> >> >>>> josef.pktd wrote: > >> >>> >> >>>>> > >> >>> >> >>>>> On Wed, May 26, 2010 at 5:03 PM, mdekauwe > >> >> > > >> >>> >> wrote: > >> >>> >> >>>>>> > >> >>> >> >>>>>> Could you possibly if you have time explain further your > >> >>> comment > >> >>> >> re > >> >>> >> >>>>>> the > >> >>> >> >>>>>> p-values, your suggesting I am misusing them? > >> >>> >> >>>>> > >> >>> >> >>>>> Depends on your use and interpretation > >> >>> >> >>>>> > >> >>> >> >>>>> test statistics, p-values are random variables, if you look > >> at > >> >>> >> several > >> >>> >> >>>>> tests at the same time, some p-values will be large just by > >> >>> chance. 
> >> >>> >> >>>>> If, for example you just look at the largest test > statistic, > >> >>> then > >> >>> >> the > >> >>> >> >>>>> distribution for the max of several test statistics is not > >> the > >> >>> same > >> >>> >> as > >> >>> >> >>>>> the distribution for a single test statistic > >> >>> >> >>>>> > >> >>> >> >>>>> http://en.wikipedia.org/wiki/Multiple_comparisons > >> >>> >> >>>>> > >> http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm > >> >>> >> >>>>> > >> >>> >> >>>>> we also just had a related discussion for ANOVA post-hoc > >> tests > >> >>> on > >> >>> >> the > >> >>> >> >>>>> pystatsmodels group. > >> >>> >> >>>>> > >> >>> >> >>>>> Josef > >> >>> >> >>>>>> > >> >>> >> >>>>>> Thanks. > >> >>> >> >>>>>> > >> >>> >> >>>>>> > >> >>> >> >>>>>> josef.pktd wrote: > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> On Sat, May 22, 2010 at 6:21 AM, mdekauwe > >> >>> > >> >>> >> >>>>>>> wrote: > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> Sounds like I am stuck with the loop as I need to do the > >> >>> >> comparison > >> >>> >> >>>>>>>> for > >> >>> >> >>>>>>>> each > >> >>> >> >>>>>>>> pixel of the world and then I have a basemap function > >> call > >> >>> which > >> >>> >> I > >> >>> >> >>>>>>>> guess > >> >>> >> >>>>>>>> slows it down further...hmm > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> I don't see much that could be done differently, after a > >> >>> brief > >> >>> >> look. > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> stats.pearsonr could be replaced by an array version > using > >> >>> >> directly > >> >>> >> >>>>>>> the formula for correlation even with nans. wilcoxon > looks > >> >>> slow, > >> >>> >> and > >> >>> >> >>>>>>> I > >> >>> >> >>>>>>> never tried or seen a faster version. > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> just a reminder, the p-values are for a single test, when > >> you > >> >>> >> have > >> >>> >> >>>>>>> many of them, then they don't have the right > >> size/confidence > >> >>> >> level > >> >>> >> >>>>>>> for > >> >>> >> >>>>>>> an overall or joint test. (some packages report a > >> Bonferroni > >> >>> >> >>>>>>> correction in this case) > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> Josef > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> i.e. 
> >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> def compareSnowData(jules_var): > >> >>> >> >>>>>>>> # Extract the 11 years of snow data and return > >> >>> >> >>>>>>>> outrows = 180 > >> >>> >> >>>>>>>> outcols = 360 > >> >>> >> >>>>>>>> numyears = 11 > >> >>> >> >>>>>>>> nummonths = 12 > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # Read various files > >> >>> >> >>>>>>>> fname="world_valid_jules_pts.ascii" > >> >>> >> >>>>>>>> (numpts, land_pts_index, latitude, longitude, rows, > >> cols) > >> >>> = > >> >>> >> >>>>>>>> jo.read_land_points_ascii(fname, 1.0) > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax0.mon.gra" > >> >>> >> >>>>>>>> jules_data1 = jo.readJulesOutBinary(fname, > >> numrows=15238, > >> >>> >> >>>>>>>> numcols=1, > >> >>> >> >>>>>>>> \ > >> >>> >> >>>>>>>> timesteps=132, numvars=26) > >> >>> >> >>>>>>>> fname = "globalSnowRun_1985_96.GSWP2.nsmax3.mon.gra" > >> >>> >> >>>>>>>> jules_data2 = jo.readJulesOutBinary(fname, > >> numrows=15238, > >> >>> >> >>>>>>>> numcols=1, > >> >>> >> >>>>>>>> \ > >> >>> >> >>>>>>>> timesteps=132, numvars=26) > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # grab some space > >> >>> >> >>>>>>>> data1_snow = np.zeros((nummonths * numyears, numpts), > >> >>> >> >>>>>>>> dtype=np.float32) > >> >>> >> >>>>>>>> data2_snow = np.zeros((nummonths * numyears, numpts), > >> >>> >> >>>>>>>> dtype=np.float32) > >> >>> >> >>>>>>>> pearsonsr_snow = np.ones((outrows, outcols), > >> >>> >> dtype=np.float32) > >> >>> >> * > >> >>> >> >>>>>>>> np.nan > >> >>> >> >>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >> >>> >> dtype=np.float32) > >> >>> >> >>>>>>>> * > >> >>> >> >>>>>>>> np.nan > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # extract the data > >> >>> >> >>>>>>>> data1_snow = jules_data1[:,jules_var,:,0] > >> >>> >> >>>>>>>> data2_snow = jules_data2[:,jules_var,:,0] > >> >>> >> >>>>>>>> data1_snow = np.where(data1_snow < 0.0, np.nan, > >> >>> data1_snow) > >> >>> >> >>>>>>>> data2_snow = np.where(data2_snow < 0.0, np.nan, > >> >>> data2_snow) > >> >>> >> >>>>>>>> #for month in xrange(numyears * nummonths): > >> >>> >> >>>>>>>> # for i in xrange(numpts): > >> >>> >> >>>>>>>> # data1 = > >> >>> >> >>>>>>>> jules_data1[month,jules_var,land_pts_index[i],0] > >> >>> >> >>>>>>>> # data2 = > >> >>> >> >>>>>>>> jules_data2[month,jules_var,land_pts_index[i],0] > >> >>> >> >>>>>>>> # if data1 >= 0.0: > >> >>> >> >>>>>>>> # data1_snow[month,i] = data1 > >> >>> >> >>>>>>>> # else: > >> >>> >> >>>>>>>> # data1_snow[month,i] = np.nan > >> >>> >> >>>>>>>> # if data2 > 0.0: > >> >>> >> >>>>>>>> # data2_snow[month,i] = data2 > >> >>> >> >>>>>>>> # else: > >> >>> >> >>>>>>>> # data2_snow[month,i] = np.nan > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # exclude any months from *both* arrays where we have > >> >>> dodgy > >> >>> >> >>>>>>>> data, > >> >>> >> >>>>>>>> else > >> >>> >> >>>>>>>> we > >> >>> >> >>>>>>>> # can't do the correlations correctly!! > >> >>> >> >>>>>>>> data1_snow = np.where(np.isnan(data2_snow), np.nan, > >> >>> >> data1_snow) > >> >>> >> >>>>>>>> data2_snow = np.where(np.isnan(data1_snow), np.nan, > >> >>> >> data1_snow) > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # put data on a regular grid... > >> >>> >> >>>>>>>> print 'regridding landpts...' 
> >> >>> >> >>>>>>>> for i in xrange(numpts): > >> >>> >> >>>>>>>> # exclude the NaN, note masking them doesn't work > >> in > >> >>> the > >> >>> >> >>>>>>>> stats > >> >>> >> >>>>>>>> func > >> >>> >> >>>>>>>> x = data1_snow[:,i] > >> >>> >> >>>>>>>> x = x[np.isfinite(x)] > >> >>> >> >>>>>>>> y = data2_snow[:,i] > >> >>> >> >>>>>>>> y = y[np.isfinite(y)] > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # r^2 > >> >>> >> >>>>>>>> # exclude v.small arrays, i.e. we need just less > >> over > >> >>> 4 > >> >>> >> >>>>>>>> years > >> >>> >> >>>>>>>> of > >> >>> >> >>>>>>>> data > >> >>> >> >>>>>>>> if len(x) and len(y) > 50: > >> >>> >> >>>>>>>> > >> pearsonsr_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >> = > >> >>> >> >>>>>>>> (stats.pearsonr(x, y)[0])**2 > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> # wilcox signed rank test > >> >>> >> >>>>>>>> # make sure we have enough samples to do the test > >> >>> >> >>>>>>>> d = x - y > >> >>> >> >>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) # > >> Keep > >> >>> all > >> >>> >> >>>>>>>> non-zero > >> >>> >> >>>>>>>> differences > >> >>> >> >>>>>>>> count = len(d) > >> >>> >> >>>>>>>> if count > 10: > >> >>> >> >>>>>>>> z, pval = stats.wilcoxon(x, y) > >> >>> >> >>>>>>>> # only map out sign different data > >> >>> >> >>>>>>>> if pval < 0.05: > >> >>> >> >>>>>>>> > >> >>> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >> >>> >> = > >> >>> >> >>>>>>>> np.mean(x - y) > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> return (pearsonsr_snow, wilcoxStats_snow) > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> josef.pktd wrote: > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> On Fri, May 21, 2010 at 10:14 PM, mdekauwe < > >> >>> mdekauwe at gmail.com> > >> >>> >> >>>>>>>>> wrote: > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> Also I then need to remap the 2D array I make onto > >> another > >> >>> >> grid > >> >>> >> >>>>>>>>>> (the > >> >>> >> >>>>>>>>>> world in > >> >>> >> >>>>>>>>>> this case). Which again I had am doing with a loop > >> (note > >> >>> >> numpts > >> >>> >> >>>>>>>>>> is > >> >>> >> >>>>>>>>>> a > >> >>> >> >>>>>>>>>> lot > >> >>> >> >>>>>>>>>> bigger than my example above). 
> >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> wilcoxStats_snow = np.ones((outrows, outcols), > >> >>> >> dtype=np.float32) > >> >>> >> >>>>>>>>>> * > >> >>> >> >>>>>>>>>> np.nan > >> >>> >> >>>>>>>>>> for i in xrange(numpts): > >> >>> >> >>>>>>>>>> # exclude the NaN, note masking them doesn't > >> work > >> >>> in > >> >>> >> the > >> >>> >> >>>>>>>>>> stats > >> >>> >> >>>>>>>>>> func > >> >>> >> >>>>>>>>>> x = data1_snow[:,i] > >> >>> >> >>>>>>>>>> x = x[np.isfinite(x)] > >> >>> >> >>>>>>>>>> y = data2_snow[:,i] > >> >>> >> >>>>>>>>>> y = y[np.isfinite(y)] > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> # wilcox signed rank test > >> >>> >> >>>>>>>>>> # make sure we have enough samples to do the > >> test > >> >>> >> >>>>>>>>>> d = x - y > >> >>> >> >>>>>>>>>> d = np.compress(np.not_equal(d,0), d ,axis=-1) > # > >> >>> Keep > >> >>> >> all > >> >>> >> >>>>>>>>>> non-zero > >> >>> >> >>>>>>>>>> differences > >> >>> >> >>>>>>>>>> count = len(d) > >> >>> >> >>>>>>>>>> if count > 10: > >> >>> >> >>>>>>>>>> z, pval = stats.wilcoxon(x, y) > >> >>> >> >>>>>>>>>> # only map out sign different data > >> >>> >> >>>>>>>>>> if pval < 0.05: > >> >>> >> >>>>>>>>>> > >> >>> >> wilcoxStats_snow[((180-1)-(rows[i]-1)),cols[i]-1] > >> >>> >> >>>>>>>>>> = > >> >>> >> >>>>>>>>>> np.mean(x - y) > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> Now I think I can push the data in one move into the > >> >>> >> >>>>>>>>>> wilcoxStats_snow > >> >>> >> >>>>>>>>>> array > >> >>> >> >>>>>>>>>> by removing the index, > >> >>> >> >>>>>>>>>> but I can't see how I will get the individual x and y > >> pts > >> >>> for > >> >>> >> >>>>>>>>>> each > >> >>> >> >>>>>>>>>> array > >> >>> >> >>>>>>>>>> member correctly without the loop, this was my attempt > >> >>> which > >> >>> >> of > >> >>> >> >>>>>>>>>> course > >> >>> >> >>>>>>>>>> doesn't work! > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> x = data1_snow[:,:] > >> >>> >> >>>>>>>>>> x = x[np.isfinite(x)] > >> >>> >> >>>>>>>>>> y = data2_snow[:,:] > >> >>> >> >>>>>>>>>> y = y[np.isfinite(y)] > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> # r^2 > >> >>> >> >>>>>>>>>> # exclude v.small arrays, i.e. we need just less over > 4 > >> >>> years > >> >>> >> of > >> >>> >> >>>>>>>>>> data > >> >>> >> >>>>>>>>>> if len(x) and len(y) > 50: > >> >>> >> >>>>>>>>>> pearsonsr_snow[((180-1)-(rows-1)),cols-1] = > >> >>> >> (stats.pearsonr(x, > >> >>> >> >>>>>>>>>> y)[0])**2 > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> If you want to do pairwise comparisons with > >> stats.wilcoxon, > >> >>> >> then > >> >>> >> >>>>>>>>> you > >> >>> >> >>>>>>>>> might be stuck with the loop, since wilcoxon takes only > >> two > >> >>> 1d > >> >>> >> >>>>>>>>> arrays > >> >>> >> >>>>>>>>> at a time (if I read the help correctly). > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> Also the presence of nans might force the use a loop. > >> >>> >> stats.mstats > >> >>> >> >>>>>>>>> has > >> >>> >> >>>>>>>>> masked array versions, but I didn't see wilcoxon in the > >> >>> list. > >> >>> >> >>>>>>>>> (Even > >> >>> >> >>>>>>>>> when vectorized operations would work with regular > >> arrays, > >> >>> nan > >> >>> >> or > >> >>> >> >>>>>>>>> masked array versions still have to loop in many > cases.) > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> If you have many columns with count <= 10, so that > >> wilcoxon > >> >>> is > >> >>> >> not > >> >>> >> >>>>>>>>> calculated then it might be worth to use only array > >> >>> operations > >> >>> >> up > >> >>> >> >>>>>>>>> to > >> >>> >> >>>>>>>>> that point. 
If wilcoxon is calculated most of the time, > >> >>> then > >> >>> >> it's > >> >>> >> >>>>>>>>> not > >> >>> >> >>>>>>>>> worth thinking too hard about this. > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> Josef > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> thanks. > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> mdekauwe wrote: > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> Yes as Zachary said index is only 0 to 15237, so both > >> >>> methods > >> >>> >> >>>>>>>>>>> work. > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> I don't quite get what you mean about slicing with > >> axis > >> > > >> >>> 3. > >> >>> >> Is > >> >>> >> >>>>>>>>>>> there > >> >>> >> >>>>>>>>>>> a > >> >>> >> >>>>>>>>>>> link you can recommend I should read? Does that mean > >> >>> given > >> >>> I > >> >>> >> >>>>>>>>>>> have > >> >>> >> >>>>>>>>>>> 4dims > >> >>> >> >>>>>>>>>>> that Josef's suggestion would be more advised in this > >> >>> case? > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> There were several discussions on the mailing lists > >> (fancy > >> >>> >> slicing > >> >>> >> >>>>>>>>> and > >> >>> >> >>>>>>>>> indexing). Your case is safe, but if you run in future > >> into > >> >>> >> funny > >> >>> >> >>>>>>>>> shapes, you can look up the details. > >> >>> >> >>>>>>>>> when in doubt, I use np.arange(...) > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> Josef > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> Thanks. > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> josef.pktd wrote: > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>> On Fri, May 21, 2010 at 10:55 AM, mdekauwe < > >> >>> >> mdekauwe at gmail.com> > >> >>> >> >>>>>>>>>>>> wrote: > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> Thanks that works... > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> So the way to do it is with > >> np.arange(tsteps)[:,None], > >> >>> that > >> >>> >> >>>>>>>>>>>>> was > >> >>> >> >>>>>>>>>>>>> the > >> >>> >> >>>>>>>>>>>>> step > >> >>> >> >>>>>>>>>>>>> I > >> >>> >> >>>>>>>>>>>>> was struggling with, so this forms a 2D array which > >> >>> >> replaces > >> >>> >> >>>>>>>>>>>>> the > >> >>> >> >>>>>>>>>>>>> the > >> >>> >> >>>>>>>>>>>>> two > >> >>> >> >>>>>>>>>>>>> for > >> >>> >> >>>>>>>>>>>>> loops? Do I have that right? > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>> Yes, but as Zachary showed, if you need the full > >> index > >> >>> in > >> >>> a > >> >>> >> >>>>>>>>>>>> dimension, > >> >>> >> >>>>>>>>>>>> then you can use slicing. It might be faster. > >> >>> >> >>>>>>>>>>>> And a warning, mixing slices and index arrays with 3 > >> or > >> >>> more > >> >>> >> >>>>>>>>>>>> dimensions can have some surprise switching of axes. > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>> Josef > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> A lot quicker...! 
> >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> Martin > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> josef.pktd wrote: > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> On Fri, May 21, 2010 at 8:59 AM, mdekauwe > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> wrote: > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> Hi, > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> I am trying to extract data from a 4D array and > >> store > >> >>> it > >> >>> >> in > >> >>> >> >>>>>>>>>>>>>>> a > >> >>> >> >>>>>>>>>>>>>>> 2D > >> >>> >> >>>>>>>>>>>>>>> array, > >> >>> >> >>>>>>>>>>>>>>> but > >> >>> >> >>>>>>>>>>>>>>> avoid my current usage of the for loops for > speed, > >> as > >> >>> in > >> >>> >> >>>>>>>>>>>>>>> reality > >> >>> >> >>>>>>>>>>>>>>> the > >> >>> >> >>>>>>>>>>>>>>> arrays > >> >>> >> >>>>>>>>>>>>>>> sizes are quite big. Could someone also try and > >> >>> explain > >> >>> >> the > >> >>> >> >>>>>>>>>>>>>>> solution > >> >>> >> >>>>>>>>>>>>>>> as > >> >>> >> >>>>>>>>>>>>>>> well > >> >>> >> >>>>>>>>>>>>>>> if they have a spare moment as I am still finding > >> it > >> >>> >> quite > >> >>> >> >>>>>>>>>>>>>>> difficult > >> >>> >> >>>>>>>>>>>>>>> to > >> >>> >> >>>>>>>>>>>>>>> get > >> >>> >> >>>>>>>>>>>>>>> over the habit of using loops (C convert for my > >> >>> sins). > >> >>> I > >> >>> >> get > >> >>> >> >>>>>>>>>>>>>>> that > >> >>> >> >>>>>>>>>>>>>>> one > >> >>> >> >>>>>>>>>>>>>>> could > >> >>> >> >>>>>>>>>>>>>>> precompute the indices's i and j i.e. > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> i = np.arange(tsteps) > >> >>> >> >>>>>>>>>>>>>>> j = np.arange(numpts) > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> but just can't get my head round how i then use > >> >>> them... > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> Thanks, > >> >>> >> >>>>>>>>>>>>>>> Martin > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> import numpy as np > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> numpts=10 > >> >>> >> >>>>>>>>>>>>>>> tsteps = 12 > >> >>> >> >>>>>>>>>>>>>>> vari = 22 > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> data = np.random.random((tsteps, vari, numpts, > 1)) > >> >>> >> >>>>>>>>>>>>>>> new_data = np.zeros((tsteps, numpts), > >> >>> dtype=np.float32) > >> >>> >> >>>>>>>>>>>>>>> index = np.arange(numpts) > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> for i in xrange(tsteps): > >> >>> >> >>>>>>>>>>>>>>> for j in xrange(numpts): > >> >>> >> >>>>>>>>>>>>>>> new_data[i,j] = data[i,5,index[j],0] > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> The index arrays need to be broadcastable against > >> each > >> >>> >> other. > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> I think this should do it > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> new_data = data[np.arange(tsteps)[:,None], 5, > >> >>> >> >>>>>>>>>>>>>> np.arange(numpts), > >> >>> >> >>>>>>>>>>>>>> 0] > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> Josef > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> -- > >> >>> >> >>>>>>>>>>>>>>> View this message in context: > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28633477.html > >> >>> >> >>>>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >> >>> >> Nabble.com. 
> >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>>>>>>>> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> -- > >> >>> >> >>>>>>>>>>>>> View this message in context: > >> >>> >> >>>>>>>>>>>>> > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28634924.html > >> >>> >> >>>>>>>>>>>>> Sent from the Scipy-User mailing list archive at > >> >>> >> Nabble.com. > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>>> > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>>> > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> -- > >> >>> >> >>>>>>>>>> View this message in context: > >> >>> >> >>>>>>>>>> > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28640656.html > >> >>> >> >>>>>>>>>> Sent from the Scipy-User mailing list archive at > >> >>> Nabble.com. > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>>> > >> >>> >> >>>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>>> > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> -- > >> >>> >> >>>>>>>> View this message in context: > >> >>> >> >>>>>>>> > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28642434.html > >> >>> >> >>>>>>>> Sent from the Scipy-User mailing list archive at > >> Nabble.com. 
> >> >>> >> >>>>>>>> > >> >>> >> >>>>>>>> _______________________________________________ > >> >>> >> >>>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>>> > >> >>> >> >>>>>>> _______________________________________________ > >> >>> >> >>>>>>> SciPy-User mailing list > >> >>> >> >>>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>>> > >> >>> >> >>>>>>> > >> >>> >> >>>>>> > >> >>> >> >>>>>> -- > >> >>> >> >>>>>> View this message in context: > >> >>> >> >>>>>> > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28686356.html > >> >>> >> >>>>>> Sent from the Scipy-User mailing list archive at > >> Nabble.com. > >> >>> >> >>>>>> > >> >>> >> >>>>>> _______________________________________________ > >> >>> >> >>>>>> SciPy-User mailing list > >> >>> >> >>>>>> SciPy-User at scipy.org > >> >>> >> >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>>> > >> >>> >> >>>>> _______________________________________________ > >> >>> >> >>>>> SciPy-User mailing list > >> >>> >> >>>>> SciPy-User at scipy.org > >> >>> >> >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>>> > >> >>> >> >>>>> > >> >>> >> >>>> > >> >>> >> >>>> -- > >> >>> >> >>>> View this message in context: > >> >>> >> >>>> > >> >>> > http://old.nabble.com/removing-for-loops...-tp28633477p28711249.html > >> >>> >> >>>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>> >> >>>> > >> >>> >> >>>> _______________________________________________ > >> >>> >> >>>> SciPy-User mailing list > >> >>> >> >>>> SciPy-User at scipy.org > >> >>> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>>> > >> >>> >> >>> _______________________________________________ > >> >>> >> >>> SciPy-User mailing list > >> >>> >> >>> SciPy-User at scipy.org > >> >>> >> >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >>> > >> >>> >> >>> > >> >>> >> >> > >> >>> >> >> -- > >> >>> >> >> View this message in context: > >> >>> >> >> > >> >>> > http://old.nabble.com/removing-for-loops...-tp28633477p28711444.html > >> >>> >> >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>> >> >> > >> >>> >> >> _______________________________________________ > >> >>> >> >> SciPy-User mailing list > >> >>> >> >> SciPy-User at scipy.org > >> >>> >> >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> >> > >> >>> >> > _______________________________________________ > >> >>> >> > SciPy-User mailing list > >> >>> >> > SciPy-User at scipy.org > >> >>> >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> > > >> >>> >> > > >> >>> >> > >> >>> >> -- > >> >>> >> View this message in context: > >> >>> >> > >> http://old.nabble.com/removing-for-loops...-tp28633477p28711581.html > >> >>> >> Sent from the Scipy-User mailing list archive at Nabble.com. 
> >> >>> >> > >> >>> >> _______________________________________________ > >> >>> >> SciPy-User mailing list > >> >>> >> SciPy-User at scipy.org > >> >>> >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> >> > >> >>> > > >> >>> > _______________________________________________ > >> >>> > SciPy-User mailing list > >> >>> > SciPy-User at scipy.org > >> >>> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> > > >> >>> > > >> >>> > >> >>> -- > >> >>> View this message in context: > >> >>> > http://old.nabble.com/removing-for-loops...-tp28633477p28824023.html > >> >>> Sent from the Scipy-User mailing list archive at Nabble.com. > >> >>> > >> >>> _______________________________________________ > >> >>> SciPy-User mailing list > >> >>> SciPy-User at scipy.org > >> >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> >>> > >> >> > >> >> > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> > >> -- > >> View this message in context: > >> http://old.nabble.com/removing-for-loops...-tp28633477p28846602.html > >> Sent from the Scipy-User mailing list archive at Nabble.com. > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > View this message in context: > http://old.nabble.com/removing-for-loops...-tp28633477p28848191.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.krueger2 at gmail.com Tue Jun 15 03:47:17 2010 From: andreas.krueger2 at gmail.com (Andreas Krueger) Date: Tue, 15 Jun 2010 08:47:17 +0100 Subject: [SciPy-User] ... can't be installed ... (SCIPY-INSTALLERBUG) In-Reply-To: Message-ID: ------ Forwarded Message Hi! There is a bug in your installer. See attachment. With the recent Apple update, Python 2.5.4 got updated to 2.5.5, I guess that is the reason for this problem. Please keep me posted when repaired. Thanks a lot! Andreas ------ End of Forwarded Message ------ Forwarded Message From: Jarrod Millman [...] For questions about SciPy, please contact the users mailing list: http://mail.scipy.org/mailman/listinfo/scipy-user [...] ------ End of Forwarded Message -------------- next part -------------- A non-text attachment was scrubbed... Name: SCIPY-INSTALLERBUG.png Type: video/x-fl Size: 224130 bytes Desc: not available URL: From thoeger at nbi.ku.dk Sat Jun 12 11:28:45 2010 From: thoeger at nbi.ku.dk (thoeger at nbi.ku.dk) Date: Sat, 12 Jun 2010 17:28:45 +0200 (CEST) Subject: [SciPy-User] Boxcar smoothing of 1D data array...? Message-ID: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> Hello list; This seems like it should be a simple task, but I couldn't seem to find anything in the docs about it - or rather, what I found seems to be from the Numeric/Numarray days and not valid anymore. As the subject line suggests, I have a 1D array that I want to smooth/convolve with a Boxcar kernel of a certain width. 
In IDL there is simply a function to do this, and there may well be
something people have hacked together out there to do it too - but isn't
there a simple way to do it using built-in NumPy and SciPy tools?

Cheers;
Emil

From thoeger at fys.ku.dk Mon Jun 14 08:22:26 2010
From: thoeger at fys.ku.dk (Thøger Emil Juul Thorsen)
Date: Mon, 14 Jun 2010 14:22:26 +0200
Subject: [SciPy-User] boxcar smoothing of 1D data
Message-ID: <1276518146.2470.34.camel@falconeer>

Hello list;

I have a 1D NumPy array of data that I wish to smooth/convolve with a
boxcar kernel of a certain width. In IDL this is easily done with the
SMOOTH function, and I believe it should be possible using NumPy/SciPy
tools, but I haven't been able to find any. Can anybody help me here?

From cournape at gmail.com Tue Jun 15 11:39:55 2010
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 16 Jun 2010 00:39:55 +0900
Subject: [SciPy-User] ... can't be installed ... (SCIPY-INSTALLERBUG)

On Tue, Jun 15, 2010 at 4:47 PM, Andreas Krueger wrote:
> There is a bug in your installer. See attachment.
>
> With the recent Apple update, Python 2.5.4 got updated to 2.5.5,
> I guess that is the reason for this problem.

You need to install Python from python.org. Contrary to what the
misleading message says, you need the official Python, and *not* the
Apple system Python. We do not support the Apple system Python.

David

From ben.root at ou.edu Tue Jun 15 11:46:37 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Tue, 15 Jun 2010 10:46:37 -0500
Subject: [SciPy-User] Boxcar smoothing of 1D data array...?

Emil,

You can find various windowing functions like boxcar, hamming and
hanning in the scipy.signal module.

Ben Root

On Sat, Jun 12, 2010 at 10:28 AM, thoeger at nbi.ku.dk wrote:
> [...]

From timothyjkinney at gmail.com Tue Jun 15 11:52:35 2010
From: timothyjkinney at gmail.com (Timothy Kinney)
Date: Tue, 15 Jun 2010 10:52:35 -0500
Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 43

> From: Charles R Harris
> Subject: Re: [SciPy-User] Leastsq questions
>> I am using the scipy leastsq method to fit some cooling data, such
>> that the temperature is defined by an exponential decay function
>> (Newton's Cooling Law). However, there are some other factors which
>> also influence the cooling rate and I am attempting to account for
>> them in the cooling law.
>> I have some questions about leastsq:
>>
>> 1) When I fit the data in Excel I get a different fit than when I fit
>> the same data in SciPy. Why is this? The fits are not very different,
>> but they are consistently different.
>
> It is impossible to know without a good deal more information, i.e.,
> what is your model, how is it parameterized, what is the data, when do
> the iterations stop, and, if the parameters aren't sufficiently
> independent over the data set, what is the required condition number.
> I suspect the latter is coming into play here.

I am using Newton's Cooling Law as the model. It states that the
temperature at time t is given by an exponential decay term modifying
the difference in temperature between the body and the environment:

T(t) = Ta + (T0 - Ta) exp(-kt)

where Ta is the ambient temperature, T0 is the temperature at t = 0, and
k is the cooling constant (determined empirically, which is what I am
doing by regression).

I have temperature and time data points for my sample, which I am
plotting in Excel 2003 and in Python. I then perform an exponential fit
using the graphical trendline in Excel and using leastsq in SciPy.

The data points (as lists in Python) are:

time = [356, 1476, 1477, 1478, 1480, 1480, 1481, 1482, 1485, 1489]
temp = [600, 50, 46, 43, 40, 39, 38, 37, 36, 35]

In Excel, I get the fit y = 1416.904401 exp(-0.002406x) with R^2 0.9855.
In Python, I get y = 1638.2891 * exp(-0.00251719x), and I'm not sure
what the R^2 is.

>> 2) How do I calculate the goodness of fit (R squared) for the leastsq
>> algorithm? I think it's just the sum of the squared errors divided by
>> something, but shouldn't this be easily called from the output?
>
> If you are using the function correctly, then you already have an error
> function that returns the residuals. Note that it is also available in
> the full return.

If I call the residuals function with plsq[0] (the returned information
from calling leastsq), I get an array (abbreviating the decimals):

[-68.67 10.11 7.21 5.21 3.31 0.42 0.51 -0.49 -1.39 -2.29 -2.19 -2.99
 -3.70 -3.60]

If I square each value and sum over the array I get 4956.00, but I need
to divide this by something to calculate the R^2. I see nowhere in the
documentation for leastsq where this is explained or where this
calculation is callable.

My full return is pasted below, but I don't see an R^2 written in
there... maybe it is called something else?

(array([ 1.63828911e+03, -2.51719240e-03]),
 array([[ 2.11286829e-01, -1.68448450e-06],
        [-1.68448450e-06,  2.41312772e-11]]),
 {'qtf': array([-0.00033384, 0.00037401]), 'nfev': 16,
  'fjac': array([[ 3.05684317e+05, 2.36280141e-01, 2.23786930e-01,
          2.15345493e-01, 2.06813134e-01, 1.93842627e-01,
          1.93842627e-01, 1.89472893e-01, 1.85079918e-01,
          1.80663583e-01, 1.80663583e-01, 1.76223829e-01,
          1.71760516e-01, 1.71760516e-01],
        [ 2.43706858e+00, 2.17552354e+00, 2.48497525e-01,
          2.56589204e-01, 2.64756241e-01, 2.77149269e-01,
          2.77149269e-01, 2.81318568e-01, 2.85507103e-01,
          2.89714982e-01, 2.89714982e-01, 2.93942237e-01,
          2.98188985e-01, 2.98188985e-01]]),
  'fvec': array([ -5.80026885, 31.45722273, 21.5073541 , 14.16136832,
          7.77830677, -2.36621221, -1.36621221, -5.09979308,
         -7.84278392, -10.59520846, -9.59520846, -11.35709047,
        -12.12845379, -11.12845379]),
  'ipvt': array([2, 1])},
 'Both actual and predicted relative reductions in the sum of squares\n
 are at most 0.000000', 1)

>> I would like to iterate over a computation where I change one of the
>> values and see how it affects the goodness of the fit.
>> I'm not sure how to calculate the r-squared from the plsq that is
>> returned from leastsq.
>
> plsq?

Sorry, that's what I named the returned information from calling
leastsq. plsq[0] is the array containing the coefficients for the fit.

This is actually a different fit from the one I am confused about above.
But if I can figure out how to calculate the R^2 from leastsq, I think I
can figure out how to solve that one. If not, I'll post more detailed
information about it.

I appreciate any help you can offer.

-Tim

From josef.pktd at gmail.com Tue Jun 15 12:15:43 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 15 Jun 2010 12:15:43 -0400
Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 43

On Tue, Jun 15, 2010 at 11:52 AM, Timothy Kinney wrote:
> [...]
I get the same results as your Excel results for the log transformed
linear model:

>>> y = np.array([600., 50, 46, 43, 40, 39, 38, 37, 36, 35])
>>> t = np.array([356, 1476, 1477, 1478, 1480, 1480, 1481, 1482, 1485, 1489])
>>> from scipy import stats
>>> res = stats.linregress(t, np.log(y))
>>> np.exp(res[1]), res[0]
(1416.9044014863491, -0.002405999077350617)
>>> yhat = np.exp(res[1])*np.exp(res[0]*t)
>>> y - yhat
array([-1.6609619 ,  9.35096348,  5.44864747,  2.5460967 , -0.2597068 ,
       -1.2597068 , -2.16295842, -3.06644253, -3.77828427, -4.39729448])
>>> rss = ((np.log(y) - np.log(yhat))**2).sum()
>>> rss
0.096919787740057148
>>> lny = np.log(y)
>>> 1 - rss/((lny - lny.mean())**2).sum()   # R-squared
0.98551323008032532

Josef
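For the nonlinear fit itself, the same R-squared bookkeeping can be done
directly on the residuals that leastsq minimizes. A minimal sketch along
the lines of Josef's example, assuming the two-parameter model
y = a*exp(b*t) from this thread; the starting values are illustrative:

import numpy as np
from scipy import optimize

t = np.array([356., 1476, 1477, 1478, 1480, 1480, 1481, 1482, 1485, 1489])
y = np.array([600., 50, 46, 43, 40, 39, 38, 37, 36, 35])

def residuals(p, t, y):
    a, b = p
    return y - a * np.exp(b * t)

p, ier = optimize.leastsq(residuals, [1000., -0.002], args=(t, y))
resid = residuals(p, t, y)
ss_res = (resid**2).sum()              # sum of squared residuals
ss_tot = ((y - y.mean())**2).sum()     # total sum of squares
r_squared = 1.0 - ss_res/ss_tot        # R^2 on the original (not log) scale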
From timothyjkinney at gmail.com Tue Jun 15 13:42:14 2010
From: timothyjkinney at gmail.com (Timothy Kinney)
Date: Tue, 15 Jun 2010 12:42:14 -0500
Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 45

Josef,

Thank you for showing me how to do the R-squared calculation. Also,
looking at your code I realized that I had a typo in my Python data, so
I was actually analyzing two different data sets, which is why I got
different results. I get the same result in both when I use the same
data set.

Back to work. :)

-Tim

> I get the same results as your Excel results for the log transformed
> linear model
> [...]
> Josef

From tmp50 at ukr.net Tue Jun 15 14:42:49 2010
From: tmp50 at ukr.net (Dmitrey)
Date: Tue, 15 Jun 2010 21:42:49 +0300
Subject: [SciPy-User] [Ann] OpenOpt 0.29, FuncDesigner 0.19, DerApproximator 0.19

Hi all,

I'm glad to inform you about a new release of the software (numerical
optimization, linear/nonlinear/ODE systems, automatic differentiation,
etc). For more details see http://forum.openopt.org/viewtopic.php?id=252

Regards, D.

From algebraicamente at gmail.com Tue Jun 15 19:40:39 2010
From: algebraicamente at gmail.com (Oscar Gerardo Lazo Arjona)
Date: Tue, 15 Jun 2010 23:40:39 +0000 (UTC)
Subject: [SciPy-User] multidimensional polynomial fit

David Goldsmith writes:
> Are you always going to be generating your own data in that fashion? DG

Obviously not... That was just an example.

Oscar

From david_baddeley at yahoo.com.au Tue Jun 15 20:26:35 2010
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 15 Jun 2010 17:26:35 -0700 (PDT)
Subject: [SciPy-User] Boxcar smoothing of 1D data array...?
Message-ID: <708711.77910.qm@web33002.mail.mud.yahoo.com>

Alternatively you could just use scipy.convolve with a tophat kernel,
i.e. (for a filter of length N and signal y):

scipy.convolve(y, ones(N)/N)

See the docs for scipy.convolve for more info (you might want to specify
how it handles the ends, for example).

cheers,
David
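A runnable version of that one-liner, for anyone following along; the
mode='same' choice here is just one reasonable way of handling the ends:

import numpy as np

def boxcar_smooth(y, width):
    """Smooth a 1D array with a boxcar (moving-average) kernel."""
    kernel = np.ones(width) / width
    # mode='same' returns len(y) points; the first and last width//2
    # points are computed against the zero-padded ends.
    return np.convolve(y, kernel, mode='same')

y = np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.randn(200)
smoothed = boxcar_smooth(y, 5)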
From ralf.gommers at googlemail.com Wed Jun 16 11:40:45 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 16 Jun 2010 23:40:45 +0800
Subject: [SciPy-User] Building Scipy for Mac OS X 10.6

On Tue, Jun 8, 2010 at 11:22 AM, Scott Stephens wrote:
> On Mon, Jun 7, 2010 at 1:23 AM, David wrote:
>> On 06/07/2010 12:42 PM, Scott Stephens wrote:
>>> On Sat, Jun 5, 2010 at 10:49 PM, Ralf Gommers wrote:
>>>> Could you try the 0.8.0 beta
>>>> (http://sourceforge.net/projects/scipy/files/)? Some of this may be
>>>> fixed, and you also get another 18 months' worth or so of new
>>>> features and bug fixes.
>>>
>>> 0.8.0b1 doesn't build for me using numscons. Build log is attached.
>>> I'm using numpy 1.4.1 built from source (and tested, no errors or
>>> unknown failures) and numscons freshly checked out from the git
>>> repository (at least numscons-0.11 is required to build scipy-0.8.0b1,
>>> and only numscons-0.10 is available via easy_install; couldn't find
>>> source for numscons-0.11).
>>
>> Could you see whether r6487 fixes it for you?
>
> Checked out and built the trunk (r6490). r6487 fixed the build problem,
> and some of the test failures I was getting in 0.7.1, but not all. The
> errors and failures I get now are:
>
> ERROR: test_decomp.test_lapack_misaligned(, (array([[ 1.734e-255,
>        8.189e-217, 4.025e-178, 1.903e-139, 9.344e-101,
> ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric)
> ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric)
> ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric)
> ERROR: test_continuous_basic.test_cont_basic(, (), 'wald')
> ERROR: test_continuous_basic.test_cont_basic(, (), 'wald')
> ERROR: test_continuous_basic.test_cont_basic(, (), 'wald')
> FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric)
>
> Full text of the test run is attached.

The lapack_misaligned one is a known error. The wald errors were fixed
in r6495. The arpack errors seem to be due to a lapack issue, see
http://thread.gmane.org/gmane.comp.python.scientific.devel/8551

Cheers,
Ralf
URL: From josef.pktd at gmail.com Wed Jun 16 12:40:14 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 16 Jun 2010 12:40:14 -0400 Subject: [SciPy-User] how does scipy.stats.t.isf work? In-Reply-To: References: Message-ID: On Wed, Jun 16, 2010 at 12:13 PM, neurino wrote: > Honestly the full question is > "how does scipy.stats.t.isf work compared to common spreadsheets function > T.INV?" > I have to translate an excel calculation using TINV > In all excel, ooo or abiword I get: > > TINV: inverse of the survival function of the Student t-distribution > Arguments: > p: probability > dof: number of degrees of freedom > The survival function is 1 minus the cumulative distribution function. > Note: If p < 0 or p > 1 or dof < 1 this function returns a #NUM! error. > Microsoft Excel: This function is Excel compatible. > Examples: > tinv(0,4;32) evaluates to 0,852998453651888. > > while with scipy I get: >>>> from scipy.stats import t >>>>?t.isf(.4, 32) > 0.25546356665122449 > Any advice welcome, please consider I'm an informatic but not a > mathematician. > Thanks for your support I guess Excel uses a two-sided tail probability >>> stats.t.isf(.4/2., 32) 0.85299845247181938 Josef > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From warren.weckesser at enthought.com Wed Jun 16 12:42:06 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 16 Jun 2010 11:42:06 -0500 Subject: [SciPy-User] how does scipy.stats.t.isf work? In-Reply-To: References: Message-ID: <4C18FEDE.1000709@enthought.com> josef.pktd at gmail.com wrote: > On Wed, Jun 16, 2010 at 12:13 PM, neurino wrote: > >> Honestly the full question is >> "how does scipy.stats.t.isf work compared to common spreadsheets function >> T.INV?" >> I have to translate an excel calculation using TINV >> In all excel, ooo or abiword I get: >> >> TINV: inverse of the survival function of the Student t-distribution >> Arguments: >> p: probability >> dof: number of degrees of freedom >> The survival function is 1 minus the cumulative distribution function. >> Note: If p < 0 or p > 1 or dof < 1 this function returns a #NUM! error. >> Microsoft Excel: This function is Excel compatible. >> Examples: >> tinv(0,4;32) evaluates to 0,852998453651888. >> >> while with scipy I get: >> >>>>> from scipy.stats import t >>>>> t.isf(.4, 32) >>>>> >> 0.25546356665122449 >> Any advice welcome, please consider I'm an informatic but not a >> mathematician. >> Thanks for your support >> > > I guess Excel uses a two-sided tail probability > > >>>> stats.t.isf(.4/2., 32) >>>> > 0.85299845247181938 > > Yes. Check out the documentation for TINV here: http://support.microsoft.com/kb/828340 Note that TINV(p, df) is the inverse for TDIST(x, df, 2). That '2' means TDIST is two-sided. To quote from the above link: "For any particular positive value of x, TDIST(x, df, 2) returns the probability that a t-distributed random variable with df degrees of freedom is greater than or equal to x or is less than or equal to ?x." So you will need to divide the probability by 2 to compare t.isf to TINV. 
For example, this matches TINV(0.4; 32):

>>> t.isf(0.2, 32)
0.8529984524718196

Warren

From neurino at gmail.com Wed Jun 16 12:53:17 2010
From: neurino at gmail.com (neurino)
Date: Wed, 16 Jun 2010 18:53:17 +0200
Subject: [SciPy-User] how does scipy.stats.t.isf work?

Thank you very much, now I can get my function to work as expected; I
had not managed to find these notes online myself.

Thanks again for your support.

Greetings
Renzo

2010/6/16 Warren Weckesser
> [...]
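Putting the two answers together, a small helper that mimics the
spreadsheet convention is one way to keep translated code readable; the
halving simply reflects TINV's two-sided definition:

>>> from scipy.stats import t
>>> def tinv(p, df):
...     # Excel's TINV is two-sided, so split the tail probability.
...     return t.isf(p / 2.0, df)
...
>>> tinv(0.4, 32)
0.8529984524718196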
From lasagnadavide at gmail.com Wed Jun 16 14:25:04 2010
From: lasagnadavide at gmail.com (Davide Lasagna)
Date: Wed, 16 Jun 2010 20:25:04 +0200
Subject: [SciPy-User] Boxcar smoothing of 1D data array...?

You could use the Savitzky-Golay smoothing filter function present in
the cookbook. It is very well suited for such an operation.

My two cents,

Davide

From martin.paesold at googlemail.com Thu Jun 17 02:24:49 2010
From: martin.paesold at googlemail.com (Martin Paesold)
Date: Thu, 17 Jun 2010 14:24:49 +0800
Subject: [SciPy-User] Bug of sp.optimize.curve_fit

Hi,

I ran into trouble as I tried to fit data using only one fitting
parameter. The functions '_general_function' and
'_weighted_general_function' in the module
python2.6/site-packages/scipy/optimize/minpack.py throw a TypeError.

I use Python 2.6.5 -- EPD 6.2-1 (32-bit) on Ubuntu 9.10.

I attached a file that produces the error. I think the problem is that
the argument 'params' of the above functions is passed to the model used
for the fit as 'function(xdata, *params)'. It seems that 'params' can be
scalar, which causes the TypeError when calling 'function'. I don't see
why that happens, but for now I could solve my problem by having
'_general_function' and '_weighted_general_function' check whether
'params' is scalar and cast it to a list if so:
'if isscalar(params): params = [params]'.

Cheers,
Martin Paesold
-------------- next part --------------
A non-text attachment was scrubbed...
Name: min_bug.py
Type: text/x-python
Size: 427 bytes

From warren.weckesser at enthought.com Thu Jun 17 09:13:39 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Thu, 17 Jun 2010 08:13:39 -0500
Subject: [SciPy-User] Bug of sp.optimize.curve_fit
Message-ID: <4C1A1F83.7070607@enthought.com>

Looks like a bug. I filed a ticket here:
http://projects.scipy.org/scipy/ticket/1204

Here's another example to reproduce the problem:

-----

In [5]: def func(x, a):
   ...:     y = x**a
   ...:     return y
   ...:

In [6]: x = np.array([2.0, 5.0, 6.0])

In [7]: y = np.array([4.5, 24.0, 37.5])

In [8]: curve_fit(func, x, y)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

/Users/warren/Desktop/ in ()

/Library/Frameworks/Python.framework/Versions/6.1/lib/python2.6/site-packages/scipy/optimize/minpack.pyc
in curve_fit(f, xdata, ydata, p0, sigma, **kw)
    423
    424     if (len(ydata) > len(p0)) and pcov is not None:
--> 425         s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0))
    426         pcov = pcov * s_sq
    427     else:

/Library/Frameworks/Python.framework/Versions/6.1/lib/python2.6/site-packages/scipy/optimize/minpack.pyc
in _general_function(params, xdata, ydata, function)
    337
    338 def _general_function(params, xdata, ydata, function):
--> 339     return function(xdata, *params) - ydata
    340
    341 def _weighted_general_function(params, xdata, ydata, function, weights):

TypeError: func() argument after * must be a sequence, not numpy.float64

In [9]: def func2(x, a, b):
   ...:     y = b * x**a
   ...:     return y
   ...:

In [10]: curve_fit(func2, x, y)
Out[10]:
(array([ 2.17102543,  0.75638651]),
 array([[ 0.0848126 , -0.11091404],
        [-0.11091404,  0.1456962 ]]))

-----

Warren

Martin Paesold wrote:
> [...]
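Until a fix lands, one way around this is to skip curve_fit's covariance
post-processing and call leastsq directly with a length-1 parameter
array; a sketch using the same toy data as above:

import numpy as np
from scipy import optimize

x = np.array([2.0, 5.0, 6.0])
y = np.array([4.5, 24.0, 37.5])

def resid(p, x, y):
    # p is always an array here, so one-parameter fits work too.
    return x**p[0] - y

p, ier = optimize.leastsq(resid, [1.0], args=(x, y))
print p   # best-fit exponent, as a length-1 array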
From mdekauwe at gmail.com Thu Jun 17 12:02:19 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 17 Jun 2010 09:02:19 -0700 (PDT)
Subject: [SciPy-User] removing for loops...
Message-ID: <28916343.post@talk.nabble.com>

So what happens if I need to extend in two directions at once? For
example, I have two arrays:

timesteps = np.arange(30)

and y, which has dimensions (90, 3), where each row has values of say
1.2, 3.4, 5.5. Then I have some function I call "func", whose arguments
are timesteps and one of the rows of y, e.g.

func(timesteps, y[0,:])

However, if I want to carry out this step for each of the 90 rows of y,
I can't seem to broadcast it correctly:

tmp = func(timesteps[:,np.newaxis], y)

I can see why: this only stretches timesteps in one direction, so that
it becomes (30, 3). So is there a nice way to stretch the rows as well?
I thought perhaps I needed to reshape timesteps first, but I didn't
manage to solve it that way either.

In case none of that made sense, my original loop version is:

tmp = np.zeros((90, 30))
for i in xrange(len(y)):
    tmp[i,:] = func(timesteps, y[i])

I have the feeling I am missing something very obvious here! Thanks!

From stevenj at alum.mit.edu Thu Jun 17 01:26:15 2010
From: stevenj at alum.mit.edu (Steven G. Johnson)
Date: Thu, 17 Jun 2010 01:26:15 -0400
Subject: [SciPy-User] [ANN] NLopt, a nonlinear optimization library

The NLopt library, available from http://ab-initio.mit.edu/nlopt,
provides a common interface to a large number of algorithms for both
global and local nonlinear optimization, both with and without gradient
information, and including both bound constraints and nonlinear
equality/inequality constraints.

NLopt is written in C, but now includes a Python interface (as well as
interfaces for C++, Fortran, Matlab, Octave, and Guile). It is free
software under the GNU LGPL.

Regards,
Steven G. Johnson
From ben.root at ou.edu Thu Jun 17 12:33:27 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Thu, 17 Jun 2010 11:33:27 -0500
Subject: [SciPy-User] removing for loops...

Well, there is numpy.tile(), which can replicate a numpy array multiple
times in various dimensions, but I have to wonder whether you need to
step back and think about why your dimensions aren't matching up. If you
are pulling in data where one of the dimensions has a length of 90, but
the corresponding array that denotes that dimension (timesteps) has a
length of 30, then I have to wonder if there needs to be some rethinking
of the overall design.

Ben Root

On Thu, Jun 17, 2010 at 11:02 AM, mdekauwe wrote:
> [...]
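If func is built from elementwise operations, broadcasting alone can
produce the (90, 30) result without tile. A sketch, with a made-up
elementwise model standing in for func and one parameter triple per row:

import numpy as np

timesteps = np.arange(30)       # shape (30,)
y = np.random.rand(90, 3)       # 90 parameter triples

def func(t, params):
    # Hypothetical model a*exp(-b*t) + c, applied elementwise.
    a = params[:, 0:1]          # shape (90, 1), so it broadcasts against t
    b = params[:, 1:2]
    c = params[:, 2:3]
    return a * np.exp(-b * t) + c   # (90, 1) against (30,) -> (90, 30)

tmp = func(timesteps, y)        # same result as the explicit row loop
print tmp.shape                 # (90, 30)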
From mdekauwe at gmail.com Thu Jun 17 13:48:40 2010
From: mdekauwe at gmail.com (mdekauwe)
Date: Thu, 17 Jun 2010 10:48:40 -0700 (PDT)
Subject: [SciPy-User] removing for loops...
Message-ID: <28917615.post@talk.nabble.com>

Hi,

Possibly... but y in reality holds all the iterations of an optimisation
algorithm for 3 parameters. It is 90 in the scenario I described, but it
could have been 100,000. What I was trying to do was plot all (really
the average) of the possible realisations of the model function given
these iterations (attempts to optimise the model parameters). Hence the
dims don't match. After I run the model function for each set of
candidate optimised parameters, I average the ensemble, so I end up with
a 1-dimensional array of length 30.

So I guess it is possible I am being very dumb here, and I said as much
in my original posting!

Thanks,
Martin

From jsseabold at gmail.com Thu Jun 17 18:21:51 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 17 Jun 2010 18:21:51 -0400
Subject: [SciPy-User] spelling numpy and scipy

This is probably fodder for the FAQ (though I couldn't find an answer by
searching) and possibly rather trivial, but how does one spell numpy and
scipy? Is it definitely NumPy and SciPy, or is it numpy and scipy, while
SciPy refers to the larger Scientific Python community?

Skipper

PS. Is it safe to remove the bit about the sandbox in the FAQ?

http://www.scipy.org/FAQ#head-690f5c7fb8d9a6998229bb2b271a198e078a7975

From warren.weckesser at enthought.com Thu Jun 17 20:15:48 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Thu, 17 Jun 2010 19:15:48 -0500
Subject: [SciPy-User] spelling numpy and scipy
Message-ID: <4C1ABAB4.9090402@enthought.com>

Skipper Seabold wrote:
> [...]

The main web page, www.scipy.org, uses NumPy and SciPy. I'd go with
those.

Warren

From robert.kern at gmail.com Thu Jun 17 20:18:16 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 17 Jun 2010 19:18:16 -0500
Subject: [SciPy-User] spelling numpy and scipy

On Thu, Jun 17, 2010 at 17:21, Skipper Seabold wrote:
> This is probably fodder for the FAQ ... how does one spell numpy and
> scipy?

NumPy and SciPy to refer to the projects. numpy and scipy to refer to
the packages, specifically. When in doubt, use the former.

>> PS. Is it safe to remove the bit about the sandbox in the FAQ?
>>
>> http://www.scipy.org/FAQ#head-690f5c7fb8d9a6998229bb2b271a198e078a7975

Yes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
From jsseabold at gmail.com Thu Jun 17 21:05:42 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 17 Jun 2010 21:05:42 -0400
Subject: [SciPy-User] spelling numpy and scipy

On Thu, Jun 17, 2010 at 8:18 PM, Robert Kern wrote:
> NumPy and SciPy to refer to the projects. numpy and scipy to refer to
> the packages, specifically. When in doubt, use the former.

Thanks. Hope no one minds I added this to the FAQ.

>> PS. Is it safe to remove the bit about the sandbox in the FAQ?
>
> Yes.

Done.

Skipper

From d.l.goldsmith at gmail.com Thu Jun 17 23:13:58 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 17 Jun 2010 20:13:58 -0700
Subject: [SciPy-User] spelling numpy and scipy

On Thu, Jun 17, 2010 at 6:05 PM, Skipper Seabold wrote:
> [...]

That's what I've been trying to do in the docs... FWIW.

DG

PS: Re: MATLAB, Joe checked with his university's lawyers and IIRC they
said (TM) everywhere is overkill (that amounts to defending their
trademark, which we're under no legal obligation to do), but that we
should (to be considerate and "safe") use their capitalization, which,
glancing over at my guidebooks to confirm, is MATLAB (all caps). So,
there it is.

From berthold.hoellmann at gl-group.com Fri Jun 18 04:04:23 2010
From: berthold.hoellmann at gl-group.com (Berthold Hoellmann)
Date: Fri, 18 Jun 2010 10:04:23 +0200
Subject: [SciPy-User] checking for array type in C extension

Hello,

I am having trouble checking for integer array types in a C extension
under Windows.
I've put together a small example to illustrate the problem:

------------------------------------------------------------------------
hoel at pc090498 ~/pytest $ cat tst.c
#include <stdio.h>
#include "Python.h"
#include "numpy/arrayobject.h"

#define TRY(E) (E) ; if(PyErr_Occurred()) {fprintf(stderr, "%s:%d\n", __FILE__, __LINE__); return NULL;}

static PyObject*
inttest_cfunc (PyObject *dummy, PyObject *args)
{
  PyArrayObject *array;
  TRY(PyArg_ParseTuple(args, "O!:inttest", &PyArray_Type, &array));
  fprintf(stderr, "PyArray_TYPE(array): %d; NPY_INT: %d\n",
          PyArray_TYPE(array), NPY_INT);
  if (PyArray_TYPE(array) == NPY_INT) {
    fprintf(stderr, "found NPY_INT\n");
  } else {
    fprintf(stderr, "NPY_INT not found\n");
  }
  Py_RETURN_NONE;
}

static PyMethodDef mymethods[] = {
  { "inttestfunc", inttest_cfunc, METH_VARARGS, "Doc string"},
  {NULL, NULL, 0, NULL} /* Sentinel */
};

PyMODINIT_FUNC
inittst(void)
{
  (void)Py_InitModule("tst", mymethods);
  import_array();
}

hoel at pc090498 ~/pytest $ python setup.py build
running build
...
hoel at pc090498 ~/pytest $ cat xx.py
import tst, sys
import numpy as np
print >>sys.stderr, np.__version__, np.__path__
tst.inttestfunc(np.array((1,2),dtype=np.int))
tst.inttestfunc(np.array((1,2),dtype=np.int8))
tst.inttestfunc(np.array((1,2),dtype=np.int16))
tst.inttestfunc(np.array((1,2),dtype=np.int32))
tst.inttestfunc(np.array((1,2),dtype=np.int64))

hoel at pc090498 ~/pytest $ PYTHONPATH=build/lib.win32-2.5/ python xx.py
1.4.1 ['C:\\Python25\\lib\\site-packages\\numpy']
PyArray_TYPE(array): 7; NPY_INT: 5
NPY_INT not found
PyArray_TYPE(array): 1; NPY_INT: 5
NPY_INT not found
PyArray_TYPE(array): 3; NPY_INT: 5
NPY_INT not found
PyArray_TYPE(array): 7; NPY_INT: 5
NPY_INT not found
PyArray_TYPE(array): 9; NPY_INT: 5
NPY_INT not found
------------------------------------------------------------------------

NPY_INT32 is 7 here, but shouldn't NPY_INT correspond to numpy.int? And
what kind of int is NPY_INT in this case?

Kind regards

Berthold Höllmann

--
Germanischer Lloyd AG
Berthold Höllmann
Project Engineer, CAE Development
Brooktorkai 18, 20457 Hamburg, Germany
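What seems to be happening, judging from the printed type numbers: on
32-bit Windows the default Python int maps to C long, so these arrays
carry the type number NPY_LONG (7) rather than NPY_INT (5), even though
both are 32 bits wide. The mapping can be inspected from Python; the
values below are what a 32-bit Windows build would report, and they
differ across platforms:

>>> import numpy as np
>>> np.dtype(np.int).num     # default Python int -> C long -> NPY_LONG
7
>>> np.dtype('i').num        # 'i' is C int, i.e. NPY_INT
5
>>> np.dtype(np.int32).num   # int32 picks the long variant here
7

On the C side, comparing with PyArray_EquivTypenums(PyArray_TYPE(array),
NPY_INT32) rather than with == NPY_INT should, if I am reading the C API
correctly, treat such equivalent type numbers as equal.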
From sturla at molden.no Fri Jun 18 07:22:45 2010
From: sturla at molden.no (Sturla Molden)
Date: Fri, 18 Jun 2010 13:22:45 +0200
Subject: [SciPy-User] Boxcar smoothing of 1D data array...?
Message-ID: <4C1B5705.4080600@molden.no>

On 16.06.2010 02:26, David Baddeley wrote:
> Alternatively you could just use scipy.convolve with a tophat kernel,
> i.e. (for a filter of length N and signal y):
>
> scipy.convolve(y, ones(N)/N)

You should not use convolution for boxcar filtering. It can be solved
using a recursive filter, basically

    y[n] = y[n-1] + x[n] - x[n-m]

then normalize y by 1/m.

Sturla

From marco.halder at mytum.de Fri Jun 18 09:31:36 2010
From: marco.halder at mytum.de (P3trus)
Date: Fri, 18 Jun 2010 06:31:36 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Covariance Matrix
Message-ID: <28926113.post@talk.nabble.com>

Hello, I'm sorry if this is a double post, but I'm not used to mailing
lists.

I'd like to know whether there is a built-in function to get the
covariance matrix of the fit parameters from a polynomial fit. I tried
polyfit(), but it doesn't return a covariance matrix. Is there another
way to get it? Or do I have to calculate it manually, and if so, what
would be an effective way?

From charlesr.harris at gmail.com Fri Jun 18 10:13:44 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 18 Jun 2010 08:13:44 -0600
Subject: [SciPy-User] [SciPy-user] Covariance Matrix

On Fri, Jun 18, 2010 at 7:31 AM, P3trus wrote:
> [...]

Polyfit calls lstsq, and lstsq in turn calls the LAPACK function dgelsd,
which doesn't seem to return the needed information, although it may be
buried among the returns in undocumented form. Since I usually want to
see the covariance, this is annoying. I've been thinking we should
modify lstsq so that it is more useful, but in the meantime, if you are
solving Ax = y, then the covariance is (A^T*A)^{-1}*sigma**2, where
sigma is the estimated standard deviation of the measurement errors. The
(A^T*A)^{-1} part can be gotten from A in one of its factored forms. The
tricky part is estimating the errors, since 1) they are often correlated
instead of independent, and 2) their variance may vary from data point
to data point. Which is to say the estimated covariance matrix is
useful, but probably not statistically rigorous.

Chuck
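A sketch of that recipe for a polynomial fit; it is illustrative only,
and it assumes independent, equal-variance errors, which is exactly the
caveat raised above:

import numpy as np

x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0*x + 3.0*x**2 + 0.1*np.random.randn(20)

deg = 2
A = np.vander(x, deg + 1)                  # same design matrix polyfit uses
coef, res, rank, sv = np.linalg.lstsq(A, y)
resid = y - np.dot(A, coef)
sigma2 = np.dot(resid, resid) / (len(y) - (deg + 1))   # error variance
cov = np.linalg.inv(np.dot(A.T, A)) * sigma2           # (A^T A)^-1 * sigma^2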
In-Reply-To: <4C1B5705.4080600@molden.no> References: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> <708711.77910.qm@web33002.mail.mud.yahoo.com> <4C1B5705.4080600@molden.no> Message-ID: On 18 June 2010 07:22, Sturla Molden wrote: > > Den 16.06.2010 02:26, skrev David Baddeley: > > Alternatively you could just use scipy.convolve with a tophat kernel ie (for > a filter of length N & signal y): > scipy.convolve(y, ones(N)/N) > see the docs for scipy.convolve for more info (you might want to specify how > it handles the ends, for example) > > You should not use convolution for boxcar filtering. It can be solved using > a recursive filter, basically > > ??? y[n] = y[n-1] + x[n] - x[n-m] > > then normalize y by 1/m. How does the numerical stability of this compare to a FIR implementation (with or without a Fourier transform)? Anne > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From d.l.goldsmith at gmail.com Fri Jun 18 13:43:23 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 18 Jun 2010 10:43:23 -0700 Subject: [SciPy-User] SciPy docs marathon: a little more info Message-ID: On Mon, Jun 14, 2010 at 2:05 AM, David Goldsmith wrote: > Hi, all! The scipy doc marathon has gotten off to a very slow start this > summer. We are producing less than 1000 words a week, perhaps because > many universities are still finishing up spring classes. So, this is > a second appeal to everyone to pitch in and help get scipy documented > so that it's easy to learn how to use it. Because some of the > packages are quite specialized, we need both "regular" contributors to > write lots of pages, and some people experienced in using each module > (and the mathematics behind the software) to make sure we don't water > it down or make it wrong in the process. If you can help, please, now is > the > time to step forward. Thanks! > > On behalf of Joe and myself, > > David Goldsmith > Olympia, WA > (Apparently this didn't go through the first time.) OK, a few people have come forward - thanks! Let me enumerate the categories that still have no "declared" volunteer writer-editors (all categories are in need of leaders): Max. Entropy, Misc., Image Manip. (Milestone 6) Signal processing (Milestone 8) Sparse Matrices (Milestone 9) Spatial Algorithms., Special funcs. (Milestone 10) C/C++ Integration (Milestone 13) As for the rest, only Interpolation (Milestone 3) has more than one person (but I'm one of the two), and I'm the only person on four others. So, hopefully, knowing specifically which areas are in dire need will inspire people skilled in those areas to sign up. Thanks for your time and help, DG PS: For your convenience, here's the link to the scipy Milestonespage. (Note that the Milestones link at the top of each Wiki page links, incorrectly in the case of the SciPy pages, to the NumPy Milestones page, which we are not actively working on in this Marathon; this is a known, reported bug in the Wiki program.) -------------- next part -------------- An HTML attachment was scrubbed... 
From ralf.gommers at googlemail.com Fri Jun 18 15:43:19 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 19 Jun 2010 03:43:19 +0800
Subject: [SciPy-User] Bug of sp.optimize.curve_fit

On Thu, Jun 17, 2010 at 9:13 PM, Warren Weckesser wrote:
> Looks like a bug. I filed a ticket here:
> http://projects.scipy.org/scipy/ticket/1204

Fixed in r6542. leastsq was returning a scalar instead of an array with
a single element. Both its docstring and this bug say it should return
the latter.

Cheers,
Ralf

> Here's another example to reproduce the problem:
> [...]
> > > > Cheers, > > > > Martin Paesold > > martin.paesold at gmail.com > > > > 6 Clementi Road > > #01-07 Amicus Student Hostel > > Singapore 129 741 > > > > +65 9448 8914 > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jh at physics.ucf.edu Fri Jun 18 15:56:20 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 18 Jun 2010 15:56:20 -0400 Subject: [SciPy-User] SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: (scipy-user-request@scipy.org) References: Message-ID: On Thu, 17 Jun 2010 20:13:58 -0700, David Goldsmith wrote > PS: Re: MATLAB, Joe checked w/ his University's lawyers and IIRC they said > (TM) everywhere is overkill (that amounts to defending their trademark, > which we're under no legal obligation to do), but we should (to be > considerate and "safe") use their capitalization which, glancing over at my > guidebooks to confirm, is MATLAB (all caps). So, there it is. YDRC. That was my opinion but not the lawyers'. The issue is still going back and forth between me and the lawyers. *Any* use of a trademarked word is reserved for the trademark owner unless it is "fair use" or they permit you to use it. Fair use is really nebulous. Even if you think you are fairly using the term, if the company thinks your use harms them they can sue, and such cases have been won against people who have not been producing a competing product. For example, given that NumPy and SciPy *do* provide an overlapping set of functionality, if we peppered our front page with statements like "SciPy has nothing to do with MATLAB(R)," "MATLAB(R) and SciPy are completely incompatible," and so forth, the Mathworks could say we were doing that to get high search-engine placement and to associate our product with theirs in the eyes of readers, and that we were therefore using their registered mark to divert customers from them. That would likely be called infringing, no matter how you decorated the word with (R), TM, etc. This is why, for example, the producers of CentOS do not mention the term "Red Hat" anywhere at all. Even a truthful statement using their registered word puts them in danger of a lawsuit they would have a good chance of losing. There's a goodwill aspect of things here as well; if they prefer MATLAB and you say Matlab, it can annoy them and make them more likely to have their lawyers address the issue. I've pressed our lawyers to look for established cases and precedents for use of undecorated trademarks in commentary and review, but for the docs, which are part of our "product", I think the safe route is to use MATLAB(R) as the Mathworks recommends. Quite frankly, I think doing so also makes us look more competent and serious to our own users. None of this is to recommend that we go out of our way to use the term in the docs, however. We're documenting SciPy, not MATLAB(R), and many of us have never even seen a MATLAB(R) prompt or session. We handle things like a translation table in web pages, not the docs. Someone might want to look at those pages and make sure they respect the trademark. 
--jh-- From warren.weckesser at enthought.com Fri Jun 18 16:07:53 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Fri, 18 Jun 2010 15:07:53 -0500 Subject: [SciPy-User] Bug of sp.optimize.curve_fit In-Reply-To: References: <4C1A1F83.7070607@enthought.com> Message-ID: <4C1BD219.9010409@enthought.com> Ralf Gommers wrote: > > > On Thu, Jun 17, 2010 at 9:13 PM, Warren Weckesser > > wrote: > > Looks like a bug. I filed a ticket here: > http://projects.scipy.org/scipy/ticket/1204 > > > Fixed in r6542. leastsq was returning a scalar instead of an array > with a single element. Both its docstring and this bug say it should > return the latter. Great, thanks. Warren > > Cheers, > Ralf > > > > Here's another example to reproduce the problem: > > ----- > > In [5]: def func(x, a): > ...: y = x**a > ...: return y > ...: > > In [6]: x = np.array([2.0, 5.0, 6.0]) > > In [7]: y = np.array([4.5, 24.0, 37.5]) > > In [8]: curve_fit(func, x, y) > --------------------------------------------------------------------------- > TypeError Traceback (most recent > call last) > > /Users/warren/Desktop/ in () > > /Library/Frameworks/Python.framework/Versions/6.1/lib/python2.6/site-packages/scipy/optimize/minpack.pyc > in curve_fit(f, xdata, ydata, p0, sigma, **kw) > 423 > 424 if (len(ydata) > len(p0)) and pcov is not None: > --> 425 s_sq = (func(popt, > *args)**2).sum()/(len(ydata)-len(p0)) > 426 pcov = pcov * s_sq > 427 else: > > /Library/Frameworks/Python.framework/Versions/6.1/lib/python2.6/site-packages/scipy/optimize/minpack.pyc > in _general_function(params, xdata, ydata, function) > 337 > 338 def _general_function(params, xdata, ydata, function): > --> 339 return function(xdata, *params) - ydata > 340 > 341 def _weighted_general_function(params, xdata, ydata, function, > weights): > > TypeError: func() argument after * must be a sequence, not > numpy.float64 > > In [9]: def func2(x, a, b): > ...: y = b * x**a > ...: return y > ...: > > In [10]: curve_fit(func2, x, y) > Out[10]: > (array([ 2.17102543, 0.75638651]), > array([[ 0.0848126 , -0.11091404], > [-0.11091404, 0.1456962 ]])) > > ----- > > > > Warren > > Martin Paesold wrote: > > Hi, > > > > I ran into trouble as I tried to fit data using only one fitting > > parameter. The functions '_general_function' and > > '_weigted_general_function' in the module > > python2.6/site-packages/scipy/optimize/minpack.py throw an > TypeError. > > > > I use Python 2.6.5 -- EPD 6.2-1 (32-bit) on Ubuntu 9.10 > > > > I attached a file that produces the error. I think the problem > is that > > the argument 'params' of the above functions is passed to the model > > used for the fit as 'function(xdata, *params)'. It seems that > 'params' > > can be scalar which causes the TypeError when calling 'function'. I > > don't see why that happens, but for now I could solve my problem if > > '_general_function' and '_weigted_general_function' check whether > > 'params' is scalar and cast it to a list if so: > > 'if isscalar(params): params = [params]'. 
> > > > Cheers, > > > > Martin Paesold > > martin.paesold at gmail.com > > > > 6 Clementi Road > > #01-07 Amicus Student Hostel > > Singapore 129 741 > > > > +65 9448 8914 > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david_baddeley at yahoo.com.au Fri Jun 18 16:13:26 2010 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Fri, 18 Jun 2010 13:13:26 -0700 (PDT) Subject: [SciPy-User] Boxcar smoothing of 1D data array...? In-Reply-To: <4C1B5705.4080600@molden.no> References: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> <708711.77910.qm@web33002.mail.mud.yahoo.com> <4C1B5705.4080600@molden.no> Message-ID: <27770.39270.qm@web33004.mail.mud.yahoo.com> Out of curiosity, are there any reasons other than performance (which might be moot if you have to implement the recursive filter as a python loop) for not using a convolution? cheers, David ________________________________ From: Sturla Molden To: scipy-user at scipy.org Sent: Fri, 18 June, 2010 11:22:45 PM Subject: Re: [SciPy-User] Boxcar smoothing of 1D data array...? Den 16.06.2010 02:26, skrev David Baddeley: > >Alternatively you could just use scipy.convolve with a tophat >kernel ie (for a filter of length N & signal y): > > >scipy.convolve(y, ones(N)/N) > > >see the docs for scipy.convolve for more info (you might want to >specify how it handles the ends, for example) > You should not use convolution for boxcar filtering. It can be solved using a recursive filter, basically y[n] = y[n-1] + x[n] - x[n-m] then normalize y by 1/m. Sturla -------------- next part -------------- An HTML attachment was scrubbed... URL: From ijstokes at hkl.hms.harvard.edu Fri Jun 18 16:30:35 2010 From: ijstokes at hkl.hms.harvard.edu (Ian Stokes-Rees) Date: Fri, 18 Jun 2010 16:30:35 -0400 Subject: [SciPy-User] genfromtxt with missing fields Message-ID: <4C1BD76B.1000803@hkl.hms.harvard.edu> I have an ASCII file with missing fields. The entries are tab delimited, and of mixed type. The missing field issue is somewhat different from what "missing_values" and "filling_values" supports: if a field is missing, it marks the end of the entry (i.e. there is a new line character). I cannot work out how to import this. The command (an example below) works fine when I remove the fields with missing data. Any suggestions on how to cope with this would be greatly appreciated. 
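One way to cope, if it is always trailing fields that are absent, is a small pre-processing pass that pads each short row with empty fields before genfromtxt sees it; genfromtxt accepts a list of lines in place of a filename. A minimal sketch, assuming the 11 tab-separated columns implied by the dtype in the call just below:

    import numpy as np

    ncols = 11
    padded = []
    for line in open('3cdx.snapshot.dat'):
        fields = line.rstrip('\n').split('\t')
        # Pad short rows so every line has the full column count.
        fields += [''] * (ncols - len(fields))
        padded.append('\t'.join(fields))
    # 'padded' can now be handed to np.genfromtxt as the first
    # argument, together with missing/filling values for the blanks.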
Ian d = genfromtxt('3cdx.snapshot.dat', delimiter="\t", dtype=[('mtz','S4'), ('scop', 'S6'), ('result', 'S12'), ('start','i4'), ('runtime', 'i2'), ('exitcode','u1'), ('rfz', 'f4'), ('tfz','f4'), ('pak','u1'), ('llginitial', 'i2'), ('llg', 'i2')], usecols=(1,3,4,6,7,10)) From jsseabold at gmail.com Fri Jun 18 16:38:28 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 18 Jun 2010 16:38:28 -0400 Subject: [SciPy-User] genfromtxt with missing fields In-Reply-To: <4C1BD76B.1000803@hkl.hms.harvard.edu> References: <4C1BD76B.1000803@hkl.hms.harvard.edu> Message-ID: On Fri, Jun 18, 2010 at 4:30 PM, Ian Stokes-Rees wrote: > I have an ASCII file with missing fields. ?The entries are tab > delimited, and of mixed type. ?The missing field issue is somewhat > different from what "missing_values" and "filling_values" supports: if a > field is missing, it marks the end of the entry (i.e. there is a new > line character). ?I cannot work out how to import this. ? The command > (an example below) works fine when I remove the fields with missing > data. ?Any suggestions on how to cope with this would be greatly > appreciated. > > Ian > > d = genfromtxt('3cdx.snapshot.dat', > delimiter="\t", > dtype=[('mtz','S4'), > ? ?('scop', 'S6'), > ? ?('result', 'S12'), > ? ?('start','i4'), > ? ?('runtime', 'i2'), > ? ?('exitcode','u1'), > ? ?('rfz', 'f4'), > ? ?('tfz','f4'), > ? ?('pak','u1'), > ? ?('llginitial', 'i2'), > ? ?('llg', 'i2')], > usecols=(1,3,4,6,7,10)) Since you can specify which columns to skip, then there is a newline character only if it's the last (few) column(s) that's missing correct? If not the last column then it's just \t\t? My suggestion would be to iterate through the ASCII file, split each line on the delimiter, and add tabs for these last missing entries if needed, if I understand you correctly. Skipper From ijstokes at hkl.hms.harvard.edu Fri Jun 18 16:49:14 2010 From: ijstokes at hkl.hms.harvard.edu (Ian Stokes-Rees) Date: Fri, 18 Jun 2010 16:49:14 -0400 Subject: [SciPy-User] genfromtxt with missing fields In-Reply-To: References: <4C1BD76B.1000803@hkl.hms.harvard.edu> Message-ID: <4C1BDBCA.6060106@hkl.hms.harvard.edu> > Since you can specify which columns to skip, then there is a newline > character only if it's the last (few) column(s) that's missing > correct? If not the last column then it's just \t\t? My suggestion > would be to iterate through the ASCII file, split each line on the > delimiter, and add tabs for these last missing entries if needed, if I > understand you correctly. > Yes, I can re-process the data inside my script, but I was hoping there was some clever numpy (or scipy.io) oriented way to deal with this, and there is... I should have read the exception better and looked at the full docs for genfromtxt. The answer is right there: the boolean flag "invalid_raise" will cause malformed lines to be skipped. The exception even shows where this will be applied. Ian numpy/lib/io.py in genfromtxt(fname, dtype, comments, delimiter, skiprows, skip_header, skip_footer, converters, missing, missing_values, filling_values, usecols, names, excludelist, deletechars, autostrip, case_sensitive, defaultfmt, unpack, usemask, loose, invalid_raise) 1319 # Raise an exception ? 1320 if invalid_raise: -> 1321 raise ValueError(errmsg) 1322 # Issue a warning ? 
1323 else: From lutz.maibaum at gmail.com Fri Jun 18 21:31:59 2010 From: lutz.maibaum at gmail.com (Lutz Maibaum) Date: Fri, 18 Jun 2010 18:31:59 -0700 Subject: [SciPy-User] Reading / writing sparse matrices Message-ID: How can I write a sparse matrix with elements of type uint64 to a file, and recover it while preserving the data type? For example: >>> import numpy as np >>> import scipy.sparse >>> a=scipy.sparse.lil_matrix((5,5), dtype=np.uint64) >>> a[0,0]=9876543210 Now I save this matrix to a file: >>> import scipy.io >>> scipy.io.mmwrite("test.mtx", a, field='integer') If I do not specify the field argument of mmwrite, I get a "unexpected dtype of kind u" exception. The generated file test.mtx looks as expected. But when I try to read this matrix, it is converted to int32: >>> b=scipy.io.mmread("test.mtx") >>> b.dtype dtype('int32') >>> b.data array([-2147483648], dtype=int32) As far as I can tell, it is not possible to specify a dtype when calling mmread. Is there a better way to go about this? Any help is much appreciated. Lutz From sturla at molden.no Sat Jun 19 06:00:33 2010 From: sturla at molden.no (Sturla Molden) Date: Sat, 19 Jun 2010 12:00:33 +0200 Subject: [SciPy-User] Boxcar smoothing of 1D data array...? In-Reply-To: References: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> <708711.77910.qm@web33002.mail.mud.yahoo.com> <4C1B5705.4080600@molden.no> Message-ID: <9C163EEB-0ACE-466D-8650-07621555CF43@molden.no> Den 18. juni 2010 kl. 16.51 skrev Anne Archibald : >> >> >> >> y[n] = y[n-1] + x[n] - x[n-m] >> >> then normalize y by 1/m. > > How does the numerical stability of this compare to a FIR > implementation (with or without a Fourier transform)? > >> For practical purposes, x will be a digital signal (from an ADC) or a digital image. Thus the recursive boxcar can be implemented with integer maths. Stability is excellent as numerical error is 0. :-) You just have to make sure that y does not overflow (e.g. let y be 32 bit if x is 16 bit). Sturla From matthew.brett at gmail.com Sat Jun 19 09:18:14 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 19 Jun 2010 14:18:14 +0100 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 Message-ID: Hi, > I've pressed our lawyers to look for established cases and precedents > for use of undecorated trademarks in commentary and review, but for > the docs, which are part of our "product", I think the safe route is > to use MATLAB(R) as the Mathworks recommends. ?Quite frankly, I think > doing so also makes us look more competent and serious to our own > users. As far as I can see, it doesn't make any legal difference to the use of the term, whether you attach (R) to MATLAB or not. It's difficult to see how a phrase such as 'MATLAB file format' could be anything but nominative use: http://en.wikipedia.org/wiki/Fair_use_%28U.S._trademark_law%29 http://www.publaw.com/fairusetrade.html and therefore fair use. I guess that you mean that putting (R) next to MATLAB in every use will make the Mathworks feel better and therefore less likely to sue, but it seems vanishingly unlikely to me that they would attempt this. For example, on the Sage home page: http://www.sagemath.org/ we see an undecorated 'Mission: Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.' - and this is a much more directly comparative use than we have. 
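Returning for a moment to Lutz's scipy.io question further up: mmread indeed takes no dtype argument. A minimal sketch of one workaround that sidesteps Matrix Market entirely, storing the COO triplets with numpy so the uint64 values survive the round trip (file names are illustrative):

    import numpy as np
    import scipy.sparse

    a = scipy.sparse.lil_matrix((5, 5), dtype=np.uint64)
    a[0, 0] = 9876543210

    # Save the coordinate form; dtypes are preserved exactly.
    coo = a.tocoo()
    np.savez('test.npz', row=coo.row, col=coo.col,
             data=coo.data, shape=coo.shape)

    # Rebuild the matrix with the original dtype intact.
    f = np.load('test.npz')
    b = scipy.sparse.coo_matrix((f['data'], (f['row'], f['col'])),
                                shape=tuple(f['shape']))
    assert b.dtype == np.uint64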
I think the best way is the way I suggested a while back; that is something on the lines of: These are readers for the MATLAB [1] file format. Blah Blah. The MATLAB file format specifies that... [1] MATLAB is a registered trademark belonging to the Mathworks inc. We use this trademark without permission from the Mathworks.. Our use of the trademark is not authorized by, associated with or sponsored by the trademark owner. (see http://www.publaw.com/fairusetrade.html). Putting (R) for the many mentions of MATLAB seems like overkill to me and conveys the impression that we are a bit scared of lawyers for no good reason, and thus makes us seem less competent than not doing so. On the other hand, sticking to MATLAB rather than Matlab is probably safer (http://www.publaw.com/fairusetrade.html again). Our only possible problem is that we also use 'matlab' as a module name. I can't imagine that this will exercise the Mathworks much, but it does mean we sometimes don't use 'matlab' in a nominative sense. If we want to avoid that, we'll have to rename the module to something like 'matfile'. But - 'I am not a lawyer' (TM). See you, Matthew From aarchiba at physics.mcgill.ca Sat Jun 19 12:15:59 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sat, 19 Jun 2010 12:15:59 -0400 Subject: [SciPy-User] Boxcar smoothing of 1D data array...? In-Reply-To: <9C163EEB-0ACE-466D-8650-07621555CF43@molden.no> References: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> <708711.77910.qm@web33002.mail.mud.yahoo.com> <4C1B5705.4080600@molden.no> <9C163EEB-0ACE-466D-8650-07621555CF43@molden.no> Message-ID: On 19 June 2010 06:00, Sturla Molden wrote: > > > Den 18. juni 2010 kl. 16.51 skrev Anne Archibald ?>: > >>> >>> >>> >>> ? ? y[n] = y[n-1] + x[n] - x[n-m] >>> >>> then normalize y by 1/m. >> >> How does the numerical stability of this compare to a FIR >> implementation (with or without a Fourier transform)? >> >>> > > For practical purposes, x will be a digital signal (from an ADC) or a > digital image. Thus the recursive boxcar can be implemented with > integer maths. Stability is excellent as numerical error is 0. :-) > > You just have to make sure that y does not overflow (e.g. let y be 32 > bit if x is 16 bit). Heh. You have a point there. But I should say that in the application in which we use boxcar filtering (searching for single pulses in radio pulsar search data), the data has already been processed sufficiently that we can't use integers any more, and in fact we use 32-bit floats rather than doubles. It's kind of moot for us in any case since we plan to modify the code to do matched filtering with a different filter, so convolution will be necessary. It's also worth checking: while scipy.signal does implement IIR filters, I don't think it takes advantage of zero coefficients to avoid arithmetic, so using it to implement a boxcar is probably worse than using even a non-FFT convolution. Is this right? Anne From ben.root at ou.edu Sat Jun 19 13:38:07 2010 From: ben.root at ou.edu (Benjamin Root) Date: Sat, 19 Jun 2010 12:38:07 -0500 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sat, Jun 19, 2010 at 8:18 AM, Matthew Brett wrote: > Hi, > > > I've pressed our lawyers to look for established cases and precedents > > for use of undecorated trademarks in commentary and review, but for > > the docs, which are part of our "product", I think the safe route is > > to use MATLAB(R) as the Mathworks recommends. 
Quite frankly, I think > > doing so also makes us look more competent and serious to our own > > users. > > As far as I can see, it doesn't make any legal difference to the use > of the term, whether you attach (R) to MATLAB or not. > > It's difficult to see how a phrase such as 'MATLAB file format' could > be anything but nominative use: > > http://en.wikipedia.org/wiki/Fair_use_%28U.S._trademark_law%29 > http://www.publaw.com/fairusetrade.html > > and therefore fair use. > > I guess that you mean that putting (R) next to MATLAB in every use > will make the Mathworks feel better and therefore less likely to sue, > but it seems vanishingly unlikely to me that they would attempt this. > For example, on the Sage home page: > > http://www.sagemath.org/ > > we see an undecorated 'Mission: Creating a viable free open source > alternative to Magma, Maple, Mathematica and Matlab.' - and this is a > much more directly comparative use than we have. > > I think the best way is the way I suggested a while back; that is > something on the lines of: > > These are readers for the MATLAB [1] file format. Blah Blah. The > MATLAB file format specifies that... > > [1] MATLAB is a registered trademark belonging to the Mathworks inc. > We use this trademark without permission from the Mathworks.. Our use > of the trademark is not authorized by, associated with or sponsored by > the trademark owner. > > (see http://www.publaw.com/fairusetrade.html). > > Putting (R) for the many mentions of MATLAB seems like overkill to me > and conveys the impression that we are a bit scared of lawyers for no > good reason, and thus makes us seem less competent than not doing so. > On the other hand, sticking to MATLAB rather than Matlab is probably > safer (http://www.publaw.com/fairusetrade.html again). > > Our only possible problem is that we also use 'matlab' as a module > name. I can't imagine that this will exercise the Mathworks much, but > it does mean we sometimes don't use 'matlab' in a nominative sense. > If we want to avoid that, we'll have to rename the module to something > like 'matfile'. > > I would also like to point out another possible source of issues. There are times when we might compare a function's behavior against another system, like MATLAB. While I don't recall an example in SciPy, I have seen it in matplotlib's pcolor() functions. I wouldn't be surprised to see it elsewhere, considering how we do try to cater for those moving from MATLAB. > But - 'I am not a lawyer' (TM). > > "But I play one on the internet!" :-P Ben Root See you, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Jun 19 14:00:23 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 19 Jun 2010 12:00:23 -0600 Subject: [SciPy-User] Boxcar smoothing of 1D data array...? In-Reply-To: References: <10519.90.184.76.157.1276356525.squirrel@webmail.nbi.ku.dk> <708711.77910.qm@web33002.mail.mud.yahoo.com> <4C1B5705.4080600@molden.no> <9C163EEB-0ACE-466D-8650-07621555CF43@molden.no> Message-ID: On Sat, Jun 19, 2010 at 10:15 AM, Anne Archibald wrote: > On 19 June 2010 06:00, Sturla Molden wrote: > > > > > > Den 18. juni 2010 kl. 16.51 skrev Anne Archibald < > aarchiba at physics.mcgill.ca > > >: > > > >>> > >>> > >>> > >>> y[n] = y[n-1] + x[n] - x[n-m] > >>> > >>> then normalize y by 1/m. 
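As an aside, that running sum needs no explicit Python loop; a cumulative sum yields the same y. A minimal sketch, with a wide integer accumulator so the numerical error really is zero (m and the data are illustrative):

    import numpy as np

    def boxcar(x, m):
        # y[n] = y[n-1] + x[n] - x[n-m], computed via a cumulative sum.
        c = np.cumsum(x, dtype=np.int64)  # wide accumulator, no overflow
        y = c.copy()
        y[m:] = c[m:] - c[:-m]
        return y / float(m)               # normalize by 1/m

    x = np.random.randint(0, 1 << 16, size=1000)
    smoothed = boxcar(x, 10)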
> >> > >> How does the numerical stability of this compare to a FIR > >> implementation (with or without a Fourier transform)? > >> > >>> > > > > For practical purposes, x will be a digital signal (from an ADC) or a > > digital image. Thus the recursive boxcar can be implemented with > > integer maths. Stability is excellent as numerical error is 0. :-) > > > > You just have to make sure that y does not overflow (e.g. let y be 32 > > bit if x is 16 bit). > > Heh. You have a point there. But I should say that in the application > in which we use boxcar filtering (searching for single pulses in radio > pulsar search data), the data has already been processed sufficiently > that we can't use integers any more, and in fact we use 32-bit floats > rather than doubles. It's kind of moot for us in any case since we > plan to modify the code to do matched filtering with a different > filter, so convolution will be necessary. > > It's also worth checking: while scipy.signal does implement IIR > filters, I don't think it takes advantage of zero coefficients to > avoid arithmetic, so using it to implement a boxcar is probably worse > than using even a non-FFT convolution. Is this right? > > Note that the transfer functions are the same if the zero at 1 in the numerator exactly cancels the zero in the denominator. However, the output has to be correctly initialized, since the initial value is otherwise carried along, which you can see by setting the inputs to zero. So the algebraic cancellation doesn't remove all the side effects of using a recursive filter. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sat Jun 19 14:09:35 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 19 Jun 2010 11:09:35 -0700 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sat, Jun 19, 2010 at 6:18 AM, Matthew Brett wrote: > Hi, > > > I've pressed our lawyers to look for established cases and precedents > > for use of undecorated trademarks in commentary and review, but for > > the docs, which are part of our "product", I think the safe route is > > to use MATLAB(R) as the Mathworks recommends. Quite frankly, I think > > doing so also makes us look more competent and serious to our own > > users. > > As far as I can see, it doesn't make any legal difference to the use > of the term, whether you attach (R) to MATLAB or not. > > It's difficult to see how a phrase such as 'MATLAB file format' could > be anything but nominative use: > > http://en.wikipedia.org/wiki/Fair_use_%28U.S._trademark_law%29 > http://www.publaw.com/fairusetrade.html > > and therefore fair use. > > I guess that you mean that putting (R) next to MATLAB in every use > will make the Mathworks feel better and therefore less likely to sue, > but it seems vanishingly unlikely I think you mean vanishingly likely (vanishingly unlikely would mean approaching certainty). DG -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From d.l.goldsmith at gmail.com Sat Jun 19 14:11:41 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 19 Jun 2010 11:11:41 -0700 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sat, Jun 19, 2010 at 10:38 AM, Benjamin Root wrote: > > On Sat, Jun 19, 2010 at 8:18 AM, Matthew Brett wrote: > >> Hi, >> >> > I've pressed our lawyers to look for established cases and precedents >> > for use of undecorated trademarks in commentary and review, but for >> > the docs, which are part of our "product", I think the safe route is >> > to use MATLAB(R) as the Mathworks recommends. Quite frankly, I think >> > doing so also makes us look more competent and serious to our own >> > users. >> >> As far as I can see, it doesn't make any legal difference to the use >> of the term, whether you attach (R) to MATLAB or not. >> >> It's difficult to see how a phrase such as 'MATLAB file format' could >> be anything but nominative use: >> >> http://en.wikipedia.org/wiki/Fair_use_%28U.S._trademark_law%29 >> http://www.publaw.com/fairusetrade.html >> >> and therefore fair use. >> >> I guess that you mean that putting (R) next to MATLAB in every use >> will make the Mathworks feel better and therefore less likely to sue, >> but it seems vanishingly unlikely to me that they would attempt this. >> For example, on the Sage home page: >> >> http://www.sagemath.org/ >> >> we see an undecorated 'Mission: Creating a viable free open source >> alternative to Magma, Maple, Mathematica and Matlab.' - and this is a >> much more directly comparative use than we have. >> >> I think the best way is the way I suggested a while back; that is >> something on the lines of: >> >> These are readers for the MATLAB [1] file format. Blah Blah. The >> MATLAB file format specifies that... >> >> [1] MATLAB is a registered trademark belonging to the Mathworks inc. >> We use this trademark without permission from the Mathworks.. Our use >> of the trademark is not authorized by, associated with or sponsored by >> the trademark owner. >> >> (see http://www.publaw.com/fairusetrade.html). >> >> Putting (R) for the many mentions of MATLAB seems like overkill to me >> and conveys the impression that we are a bit scared of lawyers for no >> good reason, and thus makes us seem less competent than not doing so. >> On the other hand, sticking to MATLAB rather than Matlab is probably >> safer (http://www.publaw.com/fairusetrade.html again). >> >> Our only possible problem is that we also use 'matlab' as a module >> name. I can't imagine that this will exercise the Mathworks much, but >> it does mean we sometimes don't use 'matlab' in a nominative sense. >> If we want to avoid that, we'll have to rename the module to something >> like 'matfile'. >> >> I would also like to point out another possible source of issues. There > are times when we might compare a function's behavior against another > system, like MATLAB. While I don't recall an example in SciPy, I have seen > it in matplotlib's pcolor() functions. I wouldn't be surprised to see it > elsewhere, considering how we do try to cater for those moving from MATLAB. > > >> But - 'I am not a lawyer' (TM). >> >> "But I play one on the internet!" :-P > > Ben Root > In Matthew's defense, he didn't only cite Wikipedia. ;-) DG -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Sat Jun 19 15:12:14 2010 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 19 Jun 2010 12:12:14 -0700 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sat, Jun 19, 2010 at 11:09 AM, David Goldsmith wrote: > On Sat, Jun 19, 2010 at 6:18 AM, Matthew Brett > wrote: >> I guess that you mean that putting (R) next to MATLAB in every use >> will make the Mathworks feel better and therefore less likely to sue, >> but it seems vanishingly unlikely > > I think you mean vanishingly likely (vanishingly unlikely would mean > approaching certainty). "Vanishingly unlikely" is a conventional expression (you can google it to see lots of uses), and it means "so unlikely that the possibility of it happening vanishes"; pretty standard in my dialect. OTOH, I wouldn't understand "vanishingly likely" if I heard it. (Linguists use SciPy too ;-)) -- Nathaniel From d.l.goldsmith at gmail.com Sat Jun 19 17:58:45 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 19 Jun 2010 14:58:45 -0700 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sat, Jun 19, 2010 at 12:12 PM, Nathaniel Smith wrote: > On Sat, Jun 19, 2010 at 11:09 AM, David Goldsmith > wrote: > > On Sat, Jun 19, 2010 at 6:18 AM, Matthew Brett > > wrote: > >> I guess that you mean that putting (R) next to MATLAB in every use > >> will make the Mathworks feel better and therefore less likely to sue, > >> but it seems vanishingly unlikely > > > > I think you mean vanishingly likely (vanishingly unlikely would mean > > approaching certainty). > > "Vanishingly unlikely" is a conventional expression (you can google it > to see lots of uses), and it means "so unlikely that the possibility > of it happening vanishes"; pretty standard in my dialect. OTOH, I > wouldn't understand "vanishingly likely" if I heard it. > > (Linguists use SciPy too ;-)) > I wouldn't have thunk it a linguistic issue, but a logic one. Vanishingly likely produces google hits also, but "vanishingly likely" (in quotes) produces O(10^2) fewer hits than "vanishingly unlikely" (in quotes) so I guess, given predominant usage (for language is determined democratically, not logically), you win. Sorry for the noise. DG > > -- Nathaniel > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sat Jun 19 19:44:52 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 20 Jun 2010 00:44:52 +0100 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: Hi, > I would also like to point out another possible source of issues.? There are > times when we might compare a function's behavior against another system, > like MATLAB.? While I don't recall an example in SciPy, I have seen it in > matplotlib's pcolor() functions.? I wouldn't be surprised to see it > elsewhere, considering how we do try to cater for those moving from MATLAB. 
I think that's the same issue. From the wikipedia page: "A nonowner may also use a trademark nominatively?to refer to the actual trademarked product or its source. In addition to protecting product criticism and analysis, United States law actually encourages nominative usage by competitors in the form of comparative advertising." >> But - 'I am not a lawyer' (TM). >> > "But I play one on the internet!" :-P I very much like to see arguments based on sources - then I can see how the argument is formed, and how I can engage in it, if I am interested. It makes it easier to have an informed discussion. See you, Matthew From jh at physics.ucf.edu Sun Jun 20 13:01:53 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Sun, 20 Jun 2010 13:01:53 -0400 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: (message from Matthew Brett on Sat, 19 Jun 2010 14:18:14 +0100) References: Message-ID: >> I've pressed our lawyers to look for established cases and precedents >> for use of undecorated trademarks in commentary and review, but for >> the docs, which are part of our "product", I think the safe route is >> to use MATLAB(R) as the Mathworks recommends. ?Quite frankly, I think >> doing so also makes us look more competent and serious to our own >> users. > >As far as I can see, it doesn't make any legal difference to the use >of the term, whether you attach (R) to MATLAB or not. > >It's difficult to see how a phrase such as 'MATLAB file format' could >be anything but nominative use: > >http://en.wikipedia.org/wiki/Fair_use_%28U.S._trademark_law%29 >http://www.publaw.com/fairusetrade.html > >and therefore fair use. > >I guess that you mean that putting (R) next to MATLAB in every use >will make the Mathworks feel better and therefore less likely to sue, >but it seems vanishingly unlikely to me that they would attempt this. > For example, on the Sage home page: > >http://www.sagemath.org/ > >we see an undecorated 'Mission: Creating a viable free open source >alternative to Magma, Maple, Mathematica and Matlab.' - and this is a >much more directly comparative use than we have. > >I think the best way is the way I suggested a while back; that is >something on the lines of: > >These are readers for the MATLAB [1] file format. Blah Blah. The >MATLAB file format specifies that... > >[1] MATLAB is a registered trademark belonging to the Mathworks inc. >We use this trademark without permission from the Mathworks.. Our use >of the trademark is not authorized by, associated with or sponsored by >the trademark owner. > >(see http://www.publaw.com/fairusetrade.html). > >Putting (R) for the many mentions of MATLAB seems like overkill to me >and conveys the impression that we are a bit scared of lawyers for no >good reason, and thus makes us seem less competent than not doing so. > On the other hand, sticking to MATLAB rather than Matlab is probably >safer (http://www.publaw.com/fairusetrade.html again). > >Our only possible problem is that we also use 'matlab' as a module >name. I can't imagine that this will exercise the Mathworks much, but >it does mean we sometimes don't use 'matlab' in a nominative sense. >If we want to avoid that, we'll have to rename the module to something >like 'matfile'. > >But - 'I am not a lawyer' (TM). Those are nice arguments, but neither of us is a lawyer. 
If there's one thing I've learned about the law, it's that precedent, argument, and demonstration of a loss (whether justly attributed to the defendent or not) play much larger roles in comparison to the text of the law than folks like ourselves would like to believe. You really can argue, successfully, what "is" means, and a significant lawsuit, even one defended successfully, can wreck a small company. So, why don't we see what the real lawyers have to say about it? I was doing just that but needed to respond to David's premature (and incorrect) speculation. I'm not saying that your arguments are wrong, I'm just waiting for the lawyers who can say it based on precedent and their expertise applying the law, rather than Wikipedia. --jh-- From matthew.brett at gmail.com Sun Jun 20 16:51:44 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 20 Jun 2010 21:51:44 +0100 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: Hi, > Those are nice arguments, but neither of us is a lawyer. ?If there's > one thing I've learned about the law, it's that precedent, argument, > and demonstration of a loss (whether justly attributed to the > defendent or not) play much larger roles in comparison to the text of > the law than folks like ourselves would like to believe. ?You really > can argue, successfully, what "is" means, and a significant lawsuit, > even one defended successfully, can wreck a small company. ?So, why > don't we see what the real lawyers have to say about it? ?I was doing > just that but needed to respond to David's premature (and incorrect) > speculation. ?I'm not saying that your arguments are wrong, I'm just > waiting for the lawyers who can say it based on precedent and their > expertise applying the law, It seems sensible and reasonable to get a lawyer's opinion. It would be a shame though, if we ended up taking an over-conservative approach, because I think it doesn't make us look very good if we put a lot of defensive legal stuff into our code. > rather than Wikipedia. Dammit - Wikipedia again. If you think my arguments are unsound, or Wikipedia or the other links I sent are poorly informed on this issue, please say why. Otherwise it's just patronizing. See you, Matthew From d.l.goldsmith at gmail.com Mon Jun 21 01:05:11 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 20 Jun 2010 22:05:11 -0700 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Sun, Jun 20, 2010 at 1:51 PM, Matthew Brett wrote: > Hi, > > > Those are nice arguments, but neither of us is a lawyer. If there's > > one thing I've learned about the law, it's that precedent, argument, > > and demonstration of a loss (whether justly attributed to the > > defendent or not) play much larger roles in comparison to the text of > > the law than folks like ourselves would like to believe. You really > > can argue, successfully, what "is" means, and a significant lawsuit, > > even one defended successfully, can wreck a small company. So, why > > don't we see what the real lawyers have to say about it? I was doing > > just that but needed to respond to David's premature (and incorrect) > > speculation. I'm not saying that your arguments are wrong, I'm just > > waiting for the lawyers who can say it based on precedent and their > > expertise applying the law, > > It seems sensible and reasonable to get a lawyer's opinion. 
It would > be a shame though, if we ended up taking an over-conservative > approach, because I think it doesn't make us look very good if we put > a lot of defensive legal stuff into our code. > > > rather than Wikipedia. > > Dammit - Wikipedia again. If you think my arguments are unsound, or > Wikipedia or the other links I sent are poorly informed on this issue, > please say why. Otherwise it's just patronizing. > Wikipedia's quality control process is certainly rigorous enough for some purposes (e.g., settling a bet w/ someone who agrees to let Wikipedia be the arbiter of truth on the matter, or answering a question with a low cost of being wrong) but I don't think its QC process is rigorous enough to count on it when some manner of liability is at issue, that's all. That's why, for example, we strongly prefer not to use Wikipedia (or any other electronic reference whose stability is unrelaible) as the sole reference for things in NumPy/SciPy docstrings. DG > > See you, > > Matthew > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Jun 21 05:58:04 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 21 Jun 2010 10:58:04 +0100 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: Hi, > Wikipedia's quality control process is certainly rigorous enough for some > purposes (e.g., settling a bet w/ someone who agrees to let Wikipedia be the > arbiter of truth on the matter, or answering a question with a low cost of > being wrong) I find myself in the unfortunate position of unpacking John Hunter's wry joke about young Jedis earlier on in this thread. My point is not about Wikipedia's quality control or lack of it. My point is about how to have a useful opinion on a technical field, like law, where there is some inevitable ambiguity. In law, as for medicine, we are tempted to come out with some 'I would have thought X was true' statement that has essentially no content. These statements can be dangerous, in law, as for medicine, because they often rehearse quite unconscious prejudices that do not reflect the development of the field. One approach is to say 'you can't have an opinion on the law unless you're a lawyer'. The other approach is to try and get to grips with the law and precedent, and make a sensible statement based on that. The disadvantage of the 'lawyer only' approach, is that lawyers, in general, will tend to advise you to do the safest possible thing, because they want to avoid any possibility of being sued, and therefore themselves becoming liable. If, for some reason, you want to avoid the consequences of the safest possible approach (here, because we'd have embarrassing legal cruft in our docstrings) your only choice is to try and understand on what basis the lawyer might give her opinion, so you can discuss it with them sensibly. Wikipedia happens - as so often - to have a nice summary of the issues - with some detail on the precedents on which they are based. 
I sent the other links so you could see that there were other sensible sources that say the same thing. In short - let's read - and argue from sources - and try and have an informed opinion. Or not have an opinion because we don't have any interest in reading about it. But, I don't think we should allow ourselves the luxury of not reading about it and laughing at those who have for their naivety in quoting Wikipedia. See you, Matthew From jeremy at jeremysanders.net Mon Jun 21 08:57:36 2010 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Mon, 21 Jun 2010 13:57:36 +0100 Subject: [SciPy-User] ANN: Veusz 1.8 Message-ID: Veusz 1.8 --------- Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2010 Jeremy Sanders Licenced under the GPL (version 2 or greater). Veusz is a Qt4 based scientific plotting package. It is written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF/SVG output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. Data can be captured from external sources such as internet sockets or other programs. Changes in 1.8: * Rewritten several inner loops in C++ giving speedups for large datasets * Lines, points and shapes are clipped before plotting, which speeds up plotting with some Qt backends and reduces file sizes * Data histogram feature added for calculating histograms of datasets, including cumulative histograms * Data import plugins allow the user to add support for importing any file type - see Veusz wiki for details * Experimental Bezier curve option for joining data points Minor changes in 1.8: * Fix zoom button default action * Speed up user interface when handling large numbers of datasets * Reset buttons added to several dialog boxes * Add engineering number formatting for axes: %VE giving e.g. 1k or 50m * Add drop down list of number formatting option to axis tick labels * Force Qt dialog boxes to be used instead of KDE ones, as KDE ones are currently broken * Use miter joins for plotting data points for sharper appearance * Add SetAntiAliasing command to command interface to toggle anti aliasing * Fix highlighting of errors when entering settings * Fix conversion of numpy to QVariants for new versions of PyQt * New point styles added for showing limits * Reworked internals of import dialog substantially * Several other minor bug fixes Note for people building from source and package builders: * Veusz now contains C++ code, dependent for building on the development libraries of SIP, PyQt4 and Qt4. Note that Veusz will still work (but more slowly) without this helper library. 
Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Bar graphs * Plotting dates * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * Shapes and arrows on plots * LaTeX-like formatting for text * EPS/PDF/PNG/SVG/EMF export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV, FITS and user-plugin importing * Data can be captured from external sources * User defined functions, constants and can import external Python functions Requirements for source install: Python (2.4 or greater required) http://www.python.org/ Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Optional: Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/ PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits pyemf >= 2.0.0 (optional for EMF export) http://pyemf.sourceforge.net/ For EMF and better SVG export, PyQt >= 4.6 or better is required, to fix a bug in the C++ wrapping For documentation on using Veusz, see the "Documents" directory. The manual is in PDF, HTML and text format (generated from docbook). The examples are also useful documentation. Please also see and contribute to the Veusz wiki: http://barmag.net/veusz-wiki/ Issues with the current version: * Plots can sometimes be slow using antialiasing. Go to the preferences dialog or right click on the plot to disable antialiasing. * Some recent versions of PyQt/SIP will causes crashes when exporting SVG files. Update to 4.7.4 (if released) or a recent snapshot to solve this problem. If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository. Jeremy Sanders From jh at physics.ucf.edu Mon Jun 21 11:11:12 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Mon, 21 Jun 2010 11:11:12 -0400 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: (message from Matthew Brett on Sun, 20 Jun 2010 21:51:44 +0100) References: Message-ID: >> rather than Wikipedia. > >Dammit - Wikipedia again. If you think my arguments are unsound, or >Wikipedia or the other links I sent are poorly informed on this issue, >please say why. Otherwise it's just patronizing. There is no need for swearing or name calling on this list. We are among friends here. I have based (non-legal) decisions on Wikipedia only to discover a gross error that embarrassed me. Fortunately the one time it happened in print, an anonymous referee caught it. When I went back to check, the Wikipedia formula that had stood for some time had changed (it was regarding the definition of the Bayesian information criterion; look at the history if you care to). I have wondered whether it was the referee who changed it. I find Wikipedia to be right most of the time, and often to have excellent explanations of technical topics (sometimes better than any textbook). 
We encourage doc writers to refer to it rather than repeating long explanations of math topics, and even over any but the most popular reviewed texts, since it is so easily available to just about anyone. However, on topics with social and political implications (which might include the legalities of information), it is often manipulated or simply unbalanced, reflecting whatever the last editor decided to write. This is a well-known phenomenon on which numerous scholarly articles have been written (you may ascertain this for yourself). In my opinion, the safeguards against these abuses do not (and probably cannot) make up for the problem. For example, a proponent of fair use *might* write all the favorable arguments without citing key countervailing cases, and make it look like there is nothing to worry about. The problem with legal issues is that a non-lawyer cannot reliably detect whether the analysis presented is complete or not since we don't have access to or experience with the relevant decisions that might become part of the case history. This is why I'm not going to play amateur lawyer and try to evaluate the legal arguments laid out on essentially *any* web page. It is the job of a real lawyer to do that, by bringing out the arguments and precedent favoring the *other* side, to see whether our side can prevail over or sidestep them. Certainly the lawyers looking at this issue will read the Wikipedia article and the other items you have posted, and take them into consideration in their research. So at this point, I hope we can suspend this discussion, except for posting any new resources that the lawyers might use. Let's also stop changing the MATLAB-related terms until we have an opinion from a lawyer about what to change them TO. There's no need to waste time changing things twice. We have plenty of work to do as it is. --jh-- From eneide.odissea at gmail.com Mon Jun 21 17:17:41 2010 From: eneide.odissea at gmail.com (eneide.odissea) Date: Mon, 21 Jun 2010 23:17:41 +0200 Subject: [SciPy-User] max likelihood Message-ID: Hi All I had a look at the scipy.stats documentation and I was not able to find a function for maximum likelihood parameter estimation. Do you know whether is available in some other namespace/library of scipy? I found on the web few libraries ( this one is an example http://bmnh.org/~pf/p4.html ) having it, but I would prefer to start playing with what scipy already offers by default ( if any ). Kind Regards eo -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Mon Jun 21 17:22:03 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 21 Jun 2010 17:22:03 -0400 Subject: [SciPy-User] max likelihood In-Reply-To: References: Message-ID: On Mon, Jun 21, 2010 at 5:17 PM, eneide.odissea wrote: > > Hi All > I had a look at the scipy.stats documentation and I was not able to find a function for > maximum likelihood parameter estimation. > Do you know whether is available in some other namespace/library of scipy? > I found on the web few libraries ( this one is an example?http://bmnh.org/~pf/p4.html?) having it, > but I would prefer to start playing with?what scipy already offers by default ( if any ). > Kind Regards > eo What does your likelihood function look like? 
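For a concrete case, a minimal sketch of a hand-rolled negative log-likelihood — here a normal location/scale model, entirely made up for illustration — minimized with scipy.optimize.fmin:

    import numpy as np
    from scipy.optimize import fmin

    def negloglike(params, data):
        mu, sigma = params
        if sigma <= 0:
            return np.inf  # keep the optimizer inside the valid region
        # Normal log-density, summed over the sample and negated.
        return 0.5 * len(data) * np.log(2 * np.pi * sigma**2) \
               + np.sum((data - mu)**2) / (2 * sigma**2)

    data = np.random.normal(loc=3.0, scale=1.5, size=500)
    mu_hat, sigma_hat = fmin(negloglike, [0.0, 1.0], args=(data,))

For plain distribution fitting, the same thing is available ready-made as, e.g., scipy.stats.norm.fit(data).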
I am working on a Generic Likelihood model as part of statsmodels (http://statsmodels.sourceforge.net/) You can see an example here: http://scipystats.blogspot.com/2010/06/statsmodels-gsoc-week-3-update.html Of course, you can always just roll your own (negative log) likelihood function and use an optimizer from scipy.optimize. Skipper From d.l.goldsmith at gmail.com Mon Jun 21 17:34:51 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 21 Jun 2010 14:34:51 -0700 Subject: [SciPy-User] max likelihood In-Reply-To: References: Message-ID: On Mon, Jun 21, 2010 at 2:17 PM, eneide.odissea wrote: > Hi All > I had a look at the scipy.stats documentation and I was not able to find a > function for > maximum likelihood parameter estimation. > Do you know whether is available in some other namespace/library of scipy? > I found on the web few libraries ( this one is an example > http://bmnh.org/~pf/p4.html ) having it, > but I would prefer to start playing with what scipy already offers by > default ( if any ). > Kind Regards > eo > scipy.stats.distributions.rv_continuous.fit (I was just working on the docstring for that; I don't believe my changes have been merged; I believe Travis recently updated its code...) DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Mon Jun 21 17:36:40 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 21 Jun 2010 14:36:40 -0700 Subject: [SciPy-User] max likelihood In-Reply-To: References: Message-ID: On Mon, Jun 21, 2010 at 2:34 PM, David Goldsmith wrote: > On Mon, Jun 21, 2010 at 2:17 PM, eneide.odissea wrote: > >> Hi All >> I had a look at the scipy.stats documentation and I was not able to find a >> function for >> maximum likelihood parameter estimation. >> Do you know whether is available in some other namespace/library of >> scipy? >> I found on the web few libraries ( this one is an example >> http://bmnh.org/~pf/p4.html ) having it, >> but I would prefer to start playing with what scipy already offers by >> default ( if any ). >> Kind Regards >> eo >> > > scipy.stats.distributions.rv_continuous.fit (I was just working on the > docstring for that; I don't believe my changes have been merged; I believe > Travis recently updated its code...) > > DG > Yeah, according to the doc Wiki, the initial source revision is dated a week ago... DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.k.lawrence at gmail.com Mon Jun 21 17:39:40 2010 From: josh.k.lawrence at gmail.com (Josh Lawrence) Date: Mon, 21 Jun 2010 17:39:40 -0400 Subject: [SciPy-User] Parallel Matrix Operations Message-ID: <88D6998B-AC54-44C9-9788-20E634AF649E@gmail.com> Hey all, In my work, I need to perform either singular value decomposition or eigenvalue decomposition on a dense complex 5000x5000 (or larger) matrix. This is nearing the edge of suitability for a single machine. Since I have access to a large cluster, I was curious if there were any tools to do a parallel SVD. I found PETSc and similar tools, but it seems their focus is on sparse matrices. I would prefer something that can be used in conjunction with numpy/scipy. Are any of you aware of such tools? 
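For scale, a quick back-of-envelope check before reaching for a cluster: a single dense complex128 copy of a 5000x5000 matrix is 400 MB, so the single-machine route may still be viable if a few working copies fit in RAM. A minimal sketch (the random matrix is only a stand-in):

    import numpy as np
    from scipy import linalg

    n = 5000
    print n * n * 16 / 1e6, "MB per dense complex128 copy"  # 400.0

    a = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    u, s, vh = linalg.svd(a)  # LAPACK SVD; needs several such copies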
Thanks in advance,

-Josh

From jsseabold at gmail.com Mon Jun 21 17:43:58 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Mon, 21 Jun 2010 17:43:58 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 5:34 PM, David Goldsmith wrote:
> scipy.stats.distributions.rv_continuous.fit (I was just working on the
> docstring for that; I don't believe my changes have been merged; I believe
> Travis recently updated its code...)

This is for fitting the parameters of a distribution via maximum likelihood given that the DGP is the underlying distribution. I don't think it is intended for more complicated likelihood functions (where Nelder-Mead might fail). And in any event it will only find the parameters of the distribution rather than the parameters of some underlying model, if this is what you're after.

Skipper

From d.l.goldsmith at gmail.com Mon Jun 21 17:55:43 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 21 Jun 2010 14:55:43 -0700
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 2:43 PM, Skipper Seabold wrote:
> This is for fitting the parameters of a distribution via maximum
> likelihood given that the DGP is the underlying distribution. I don't
> think it is intended for more complicated likelihood functions (where
> Nelder-Mead might fail). And in any event it will only find the
> parameters of the distribution rather than the parameters of some
> underlying model, if this is what you're after.

OK, but just for clarity in my own mind: are you saying that rv_continuous.fit is _definitely_ inappropriate/inadequate for OP's needs (i.e., am I _completely_ misunderstanding the relationship between the function and OP's stated needs), or are you saying that the function _may_ not be general/robust enough for OP's stated needs?

DG

PS: Sorry, josef! (who appears to be the most recent worker on rv_continuous.fit - credit where credit is due!)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
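[A minimal sketch of the "roll your own (negative log) likelihood plus scipy.optimize" approach Skipper suggests above, assuming an i.i.d. normal sample; the data, starting values, and the negloglike name are illustrative, not an existing scipy API:]

import numpy as np
from scipy import optimize, stats

# toy data, drawn from N(loc=2, scale=3)
np.random.seed(0)
data = 2 + 3 * np.random.randn(500)

def negloglike(params, x):
    # negative log-likelihood of an i.i.d. normal sample
    loc, scale = params
    if scale <= 0:  # keep the optimizer out of invalid territory
        return np.inf
    return -np.sum(stats.norm.logpdf(x, loc=loc, scale=scale))

# optimize.fmin is Nelder-Mead; any scipy.optimize minimizer would do
loc_hat, scale_hat = optimize.fmin(negloglike, [0.0, 1.0], args=(data,))
# loc_hat and scale_hat should land near (2, 3)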
From mdekauwe at gmail.com Mon Jun 21 18:11:31 2010
From: mdekauwe at gmail.com (Martin De Kauwe)
Date: Mon, 21 Jun 2010 15:11:31 -0700 (PDT)
Subject: [SciPy-User] Building Scipy for Mac OS X 10.6
In-Reply-To:
References:
Message-ID:

Hi,

I had to reinstall all my python libs as my hard disk blew up... and I would suggest macports is the easiest way to go. But equally I managed to build scipy from the svn. Follow the advice on this website, v. good:

http://blog.hyperjeff.net/?p=160

Only thing I would add... for me to get scipy to build I had to do

python setup.py build --fcompiler=gnu95

instead of python setup.py build. Also remember you need to build a new 64-bit version of python (2.6.5), not the one that ships with your mac. You can get a nice dmg file from here, http://blog.jbhannah.net/p/565, though make sure you follow all the instructions. Saying all of this, I had issues compiling matplotlib, so as I said at the outset, go with the macports build.

Martin

On Jun 3, 4:00 am, Scott Stephens wrote:
> I'm attempting to build/install scipy from source on Mac OS X 10.6 (on
> intel hardware) and am getting failures on imports. I've compiled
> python 2.6.4 as a framework; I've built both it and numpy as
> x86_64-only applications, and am trying to build scipy the same way
> (in other words, I'm not trying to do a multi-architecture universal
> build). I ran the numpy test suite and got one known fail and one
> skipped test.
>
> I built scipy like this:
> FFLAGS="-arch x86_64 -fPIC" LDFLAGS="-Wall -arch x86_64 -undefined
> dynamic_lookup" python setup.py build
> python setup.py install
>
> I also tried the build without overriding the compile and link flags,
> but that leads to producing libraries that are universal 32-bit
> ppc/x86, rather than the desired 64 bit x86_64.
>
> When I do import scipy.fftpack, I get:
> Traceback (most recent call last):
>   File "", line 1, in
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/__init__.py", line 10, in
>     from basic import *
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/basic.py", line 13, in
>     import _fftpack as fftpack
> ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so, 2): no suitable image found. Did find:
>     /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: can't map
>
> Running scipy.test() generates 19 test failures, most of which are
> similar to the above. The obvious checks for architecture and
> dependencies don't show anything wrong:
>
> -----
> file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so: Mach-O 64-bit executable x86_64
> -----
> otool -L /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so
> /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/fftpack/_fftpack.so:
>     /usr/local/lib/libgfortran.2.dylib (compatibility version 3.0.0, current version 3.0.0)
>     /usr/local/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
>     /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.1)
> -----
>
> General system info:
> os.name: 'posix'
> sys.platform: 'darwin'
> sys.version: '2.6.4 (r264:75706, Mar 27 2010, 11:45:57) \n[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)]'
> numpy.version.version: '1.3.0'
> gcc --version: i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659)
> gfortran --version: GNU Fortran (GCC) 4.2.3
> uname -a: Darwin indy.local 10.3.0 Darwin Kernel Version 10.3.0: Fri Feb 26 11:58:09 PST 2010; root:xnu-1504.3.12~1/RELEASE_I386 i386
>
> Any ideas? I'm pretty stumped.
>
> Thanks,
>
> Scott
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From jsseabold at gmail.com Mon Jun 21 18:17:40 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Mon, 21 Jun 2010 18:17:40 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 5:55 PM, David Goldsmith wrote:
> OK, but just for clarity in my own mind: are you saying that
> rv_continuous.fit is _definitely_ inappropriate/inadequate for OP's needs
> (i.e., am I _completely_ misunderstanding the relationship between the
> function and OP's stated needs), or are you saying that the function _may_
> not be general/robust enough for OP's stated needs?

Well, I guess it depends on exactly what kind of likelihood function is being optimized. That's why I asked.

My experience with stats.distributions is all of about a week, so I could be wrong. But here it goes... rv_continuous is not intended to be used on its own but rather as the base class for any distribution. So if you believe that your data came from, say, a Gaussian distribution, then you could use norm.fit(data) (with other options as needed) to get back estimates of scale and location. So

In [31]: from scipy.stats import norm

In [32]: import numpy as np

In [33]: x = np.random.normal(loc=0,scale=1,size=1000)

In [34]: norm.fit(x)
Out[34]: (-0.043364692830314848, 1.0205901804210851)

Which is close to our given location and scale.
But if you had in mind some kind of data generating process for your model based on some other observed data, and you were interested in the marginal effects of changes in the observed data on the outcome, then it would be cumbersome, I think, to use the fit in distributions. It may not be possible. Also, as mentioned, fit only uses Nelder-Mead (optimize.fmin with the default parameters, which I've found to be inadequate for even fairly basic likelihood based models), so it may not be robust enough. At the moment, I can't think of a way to fit a parameterized model as fit is written now. Come to think of it, though, I don't think it would be much work to extend the fit method to work for something like a linear regression model.

Skipper

From josef.pktd at gmail.com Mon Jun 21 18:51:21 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 21 Jun 2010 18:51:21 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 6:17 PM, Skipper Seabold wrote:
> At the moment, I can't think of a way to fit a
> parameterized model as fit is written now. Come to think of it though
> I don't think it would be much work to extend the fit method to work
> for something like a linear regression model.

rephrasing this a bit and adding some comments:

the fit of the distributions estimates the parameters (shapes, loc and scale) directly, while often we want the distribution parameters, especially loc (or the mean), to depend on some explanatory variables.

Generalized Linear Models do this for the exponential family of distributions.

R has a package where any distribution parameter can be parameterized as a (linear) function of some explanatory variables. This would not be too difficult to implement, but I'm not sure how well established the theory and algorithms are outside of the exponential family and some specific distributions. Also, in many cases it will not be obvious that the likelihood function is well (enough) behaved.

I looked at the case for the t distribution, because I want it for GARCH, but even there it is not completely clear whether the parameterization of the t distribution should use the standard t-distribution or the standardized t-distribution (scale=var=1).

It would be easy to do a quick job, but more time consuming to get it to work correctly for many cases/distributions.

Josef

BTW: I haven't touched fit in stats.distributions in a long time, the new version is all Travis'
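[A minimal sketch of Josef's point about letting a distribution parameter, here loc, depend on explanatory variables, assuming a normal likelihood with loc = a + b*x; the data and names are illustrative, not an existing scipy or statsmodels API:]

import numpy as np
from scipy import optimize, stats

np.random.seed(0)
x = np.linspace(0, 10, 200)
y = 1.5 + 0.8 * x + np.random.normal(scale=2.0, size=x.shape)

def negloglike(params, x, y):
    # normal likelihood whose loc is a linear function of x
    a, b, scale = params
    if scale <= 0:
        return np.inf
    return -np.sum(stats.norm.logpdf(y, loc=a + b * x, scale=scale))

a_hat, b_hat, scale_hat = optimize.fmin(negloglike, [0.0, 0.0, 1.0], args=(x, y))
# with a normal likelihood this reproduces the least-squares fit of y on x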
From d.l.goldsmith at gmail.com Mon Jun 21 19:03:23 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 21 Jun 2010 16:03:23 -0700
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 3:17 PM, Skipper Seabold wrote:
> But if you had in mind some kind of data generating process for your
> model based on some other observed data and you were interested in the
> marginal effects of changes in the observed data on the outcome, then
> it would be cumbersome I think to use the fit in distributions. It may
> not be possible.

OK, this is all as I thought (e.g., fit only "works" to get the MLE's from data for a *presumed* distribution, but it is all-but-useless if the distribution isn't (believed to be) "known" a priori); just wanted to be sure I was reading you correctly. :-) Thanks!

DG
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From d.l.goldsmith at gmail.com Mon Jun 21 19:04:10 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 21 Jun 2010 16:04:10 -0700
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 3:51 PM, wrote:
> BTW: I haven't touched fit in stats.distributions in a long time, the
> new version is all Travis'

Ooops, as I originally thought, thanks!

DG
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Jun 21 19:10:21 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 21 Jun 2010 19:10:21 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 7:03 PM, David Goldsmith wrote:
> OK, this is all as I thought (e.g., fit only "works" to get the MLE's from
> data for a *presumed* distribution, but it is all-but-useless if the
> distribution isn't (believed to be) "known" a priori); just wanted to be
> sure I was reading you correctly. :-) Thanks!
MLE always assumes that the distribution is known, since you need the likelihood function. It's not non- or semi-parametric.

Josef
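[A small sketch of Josef's point: fit() maximizes the likelihood of whatever family you hand it, and will not complain if that family is wrong. The t(3) data here are illustrative:]

import numpy as np
from scipy import stats

np.random.seed(3)
data = stats.t.rvs(3, size=2000)  # heavy-tailed data, not normal

loc_n, scale_n = stats.norm.fit(data)     # assumes a normal family anyway
df_t, loc_t, scale_t = stats.t.fit(data)  # assumes a t family

# norm.fit returns estimates without any warning that the family is wrong;
# its scale estimate reflects the heavy tails, while t.fit's df estimate
# should land near 3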
From d.l.goldsmith at gmail.com Mon Jun 21 20:03:51 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 21 Jun 2010 17:03:51 -0700
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 4:10 PM, wrote:
> MLE always assumes that the distribution is known, since you need the
> likelihood function. It's not non- or semi-parametric.
>
> Josef

I'm not sure what I'm missing here (is it the definition of DGP? the meaning of Nelder-Mead? I want to learn, off-list if this is considered "noise"): according to my reference - Bain & Engelhardt, Intro. to Prob. and Math. Stat., 2nd Ed., Duxbury, 1992 - if the underlying population distribution is known, then the likelihood function is well-determined (although the likelihood equation(s) it gives rise to may not be soluble analytically, of course). So why doesn't the OP knowing the underlying distribution (as your comment above implies they should if they seek MLEs) imply that s/he would also "know" what the likelihood function "looks like" (and thus the question isn't so much what the likelihood function "looks like," but what the underlying distribution is, and thence, do we have that distribution implemented yet in scipy.stats)?

DG

--
Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From d.l.goldsmith at gmail.com Mon Jun 21 20:05:02 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Mon, 21 Jun 2010 17:05:02 -0700
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

Oh, I just figured out the definition of DGP: David Goldsmith Perplexed!
;-) DG(P)

--
Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From josef.pktd at gmail.com Mon Jun 21 20:19:29 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 21 Jun 2010 20:19:29 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 8:03 PM, David Goldsmith wrote:
> So why doesn't the OP knowing the underlying distribution (as your
> comment above implies they should if they seek MLEs) imply that s/he would
> also "know" what the likelihood function "looks like" (and thus the
> question isn't so much what the likelihood function "looks like," but what
> the underlying distribution is, and thence, do we have that distribution
> implemented yet in scipy.stats)?

DGP: data generating process.

In many cases the assumed distribution of the error or noise variable is just the normal distribution, but what's the overall model that explains the endogenous variable? distribution.fit would just assume that each observation is a random draw from the same population distribution.

But you can do MLE on standard linear regression, systems of equations, ARIMA or GARCH in time series analysis. For any of this we need to specify what the relationship between the endogenous variable and its own past and other explanatory variables is, e.g. the simplest ARMA:

A(L) y_t = B(L) e_t

with e_t independently and identically distributed (iid.) normal random variables and A(L), B(L) lag-polynomials; for the full MLE we would also need to specify initial conditions.

Or a simple linear regression with non-iid errors:

y_t = x_t * beta + e_t,    e = {e_t}_{for all t} distributed N(0, Sigma)

plus assumptions on the structure of Sigma.

In these cases the likelihood function defines a lot more than just the distribution of the error term.

Short hand: what's the DGP for y_t for all t?

Josef
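[A minimal sketch of a likelihood that goes beyond plain distribution.fit, for the simplest case of Josef's ARMA example, an AR(1) y_t = rho*y_{t-1} + e_t; it conditions on the first observation, which sidesteps the initial-conditions point above. All names and values are illustrative:]

import numpy as np
from scipy import optimize, stats

# simulate an AR(1) with rho = 0.6 and sigma = 1
np.random.seed(1)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + np.random.randn()

def negloglike(params, y):
    # Gaussian likelihood of the AR(1), conditional on y[0]
    rho, sigma = params
    if sigma <= 0 or abs(rho) >= 1:
        return np.inf
    resid = y[1:] - rho * y[:-1]  # one-step prediction errors
    return -np.sum(stats.norm.logpdf(resid, scale=sigma))

rho_hat, sigma_hat = optimize.fmin(negloglike, [0.0, 1.0], args=(y,))
# rho is a model parameter, not the loc or scale of any single
# distribution, which is Josef's point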
From jsseabold at gmail.com Mon Jun 21 20:22:48 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Mon, 21 Jun 2010 20:22:48 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 8:03 PM, David Goldsmith wrote:
>> MLE always assumes that the distribution is known, since you need the
>> likelihood function.

Unless you use Empirical Likelihood, of course.

> I'm not sure what I'm missing here (is it the definition of DGP? the meaning
> of Nelder-Mead? I want to learn, off-list if this is considered "noise")
> [...]
Someone else may have something to add here. You can certainly use the distributions in stats to build up a log-likelihood. See the third code snippet in the blog post I referenced above where I define the function loglike (but excuse the lack of proper formatting, blogger ate my custom CSS at some point). However, note that in this case the observed Y's are (assumed to be) a function of underlying Xs and it is the influence of the X's on Y via the normal cdf of the probit model that we are interested in. Actually, we are interested in the marginal effects most likely, but you can have a look into this for these models. The issue here is that for independent observations we have likelihood = product(prob(X*Beta)). For my purposes, I am more interested in the unknown beta than in the estimated mean and variance of Y and the fit in distributions is not much help here (maybe it could be with some work, but it's not clear to me how to do this quickly). hth, Skipper From d.l.goldsmith at gmail.com Mon Jun 21 20:41:46 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 21 Jun 2010 17:41:46 -0700 Subject: [SciPy-User] max likelihood In-Reply-To: References: Message-ID: On Mon, Jun 21, 2010 at 5:19 PM, wrote: > On Mon, Jun 21, 2010 at 8:03 PM, David Goldsmith > wrote: > > On Mon, Jun 21, 2010 at 4:10 PM, wrote: > >> > >> On Mon, Jun 21, 2010 at 7:03 PM, David Goldsmith > >> wrote: > >> > On Mon, Jun 21, 2010 at 3:17 PM, Skipper Seabold > > >> > wrote: > >> >> > >> >> On Mon, Jun 21, 2010 at 5:55 PM, David Goldsmith > >> >> wrote: > >> >> > On Mon, Jun 21, 2010 at 2:43 PM, Skipper Seabold > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> On Mon, Jun 21, 2010 at 5:34 PM, David Goldsmith > >> >> >> wrote: > >> >> >> > On Mon, Jun 21, 2010 at 2:17 PM, eneide.odissea > >> >> >> > > >> >> >> > wrote: > >> >> >> >> > >> >> >> >> Hi All > >> >> >> >> I had a look at the scipy.stats documentation and I was not > able > >> >> >> >> to > >> >> >> >> find a > >> >> >> >> function for > >> >> >> >> maximum likelihood parameter estimation. > >> >> >> >> Do you know whether is available in some other > namespace/library > >> >> >> >> of > >> >> >> >> scipy? > >> >> >> >> I found on the web few libraries ( this one is an > >> >> >> >> example http://bmnh.org/~pf/p4.html ) > having it, > >> >> >> >> but I would prefer to start playing with what scipy already > >> >> >> >> offers > >> >> >> >> by > >> >> >> >> default ( if any ). > >> >> >> >> Kind Regards > >> >> >> >> eo > >> >> >> > > >> >> >> > scipy.stats.distributions.rv_continuous.fit (I was just working > on > >> >> >> > the > >> >> >> > docstring for that; I don't believe my changes have been merged; > I > >> >> >> > believe > >> >> >> > Travis recently updated its code...) > >> >> >> > > >> >> >> > >> >> >> This is for fitting the parameters of a distribution via maximum > >> >> >> likelihood given that the DGP is the underlying distribution. I > >> >> >> don't > >> >> >> think it is intended for more complicated likelihood functions > >> >> >> (where > >> >> >> Nelder-Mead might fail). And in any event it will only find the > >> >> >> parameters of the distribution rather than the parameters of some > >> >> >> underlying model, if this is what you're after. 
> >> >> >> > >> >> >> Skipper > >> >> > > >> >> > OK, but just for clarity in my own mind: are you saying that > >> >> > rv_continuous.fit is _definitely_ inappropriate/inadequate for OP's > >> >> > needs > >> >> > (i.e., am I _completely_ misunderstanding the relationship between > >> >> > the > >> >> > function and OP's stated needs), or are you saying that the > function > >> >> > _may_ > >> >> > not be general/robust enough for OP's stated needs? > >> >> > >> >> Well, I guess it depends on exactly what kind of likelihood function > >> >> is being optimized. That's why I asked. > >> >> > >> >> My experience with stats.distributions is all of about a week, so I > >> >> could be wrong. But here it goes... rv_continuous is not intended to > >> >> be used on its own but rather as the base class for any distribution. > >> >> So if you believe that your data came from say an Gaussian > >> >> distribution, then you could use norm.fit(data) (with other options > as > >> >> needed) to get back estimates of scale and location. So > >> >> > >> >> In [31]: from scipy.stats import norm > >> >> > >> >> In [32]: import numpy as np > >> >> > >> >> In [33]: x = np.random.normal(loc=0,scale=1,size=1000) > >> >> > >> >> In [34]: norm.fit(x) > >> >> Out[34]: (-0.043364692830314848, 1.0205901804210851) > >> >> > >> >> Which is close to our given location and scale. > >> >> > >> >> But if you had in mind some kind of data generating process for your > >> >> model based on some other observed data and you were interested in > the > >> >> marginal effects of changes in the observed data on the outcome, then > >> >> it would be cumbersome I think to use the fit in distributions. It > may > >> >> not be possible. Also, as mentioned, fit only uses Nelder-Mead > >> >> (optimize.fmin with the default parameters, which I've found to be > >> >> inadequate for even fairly basic likelihood based models), so it may > >> >> not be robust enough. At the moment, I can't think of a way to fit a > >> >> parameterized model as fit is written now. Come to think of it > though > >> >> I don't think it would be much work to extend the fit method to work > >> >> for something like a linear regression model. > >> >> > >> >> Skipper > >> > > >> > > >> > OK, this is all as I thought (e.g., fit only "works" to get the MLE's > >> > from > >> > data for a *presumed* distribution, but it is all-but-useless if the > >> > distribution isn't (believed to be) "known" a priori); just wanted to > be > >> > sure I was reading you correctly. :-) Thanks! > >> > >> MLE always assumes that the distribution is known, since you need the > >> likelihood function. > > > > I'm not sure what I'm missing here (is it the definition of DGP? the > meaning > > of Nelder-Mead? I want to learn, off-list if this is considered "noise"): > > according to my reference - Bain & Englehardt, Intro. to Prob. and Math. > > Stat., 2nd Ed., Duxbury, 1992 - if the underlying population distribution > is > > known, then the likelihood function is well-determined (although the > > likelihood equation(s) it gives rise to may not be soluble analytically, > of > > course). 
So why doesn't the OP knowing the underlying distribution (as > your > > comment above implies they should if they seek MLEs) imply that s/he > would > > also "know" what the likelihood function "looks like," (and thus the > > question isn't so much what the likelihood function "looks like," but > what > > the underlying distribution is, and thence, do we have that distribution > > implemented yet in scipy.stats)? > > DGP: data generating process > > In many cases the assumed distribution of the error or noise variable > is just the normal distribution. But what's the overall model that > explains the endogenous variable. > distribution.fit would just assume that each observations is a random > draw from the same population distribution. > > But you can do MLE on standard linear regression, system of equations, > ARIMA or GARCH in time series analysis. For any of this we need to > specify what the relationship between the endogenous variable and it's > own past and other explanatory variables is. > e.g. simplest ARMA > > A(L) y_t = B(L) e_t > with e_t independently and identically distributed (iid.) normal > random variable > A(L), B(L) lag-polynomials > and for the full MLE we would also need to specify initial conditions. > > simple linear regression with non iid errors > y_t = x_t * beta + e_t e = {e_t}_{for all t} distributed N(0, > Sigma) plus assumptions on the structure of Sigma > > in these cases the likelihood function defines a lot more than just > the distribution of the error term. > Ah, jetzt ich verstehe (ich denke). So in the general case, the procedure needs to "apportion" the information in the data among the parameters of the "mechanistic" part of the model and the parameters of the "random noise" part of the model, and the Maximum Likelihood Equations give you the values of all these parameters (the mechanistic ones and noise ones) that maximize the likelihood of observing the data one observed, correct? DG(NLP?) > > short hand: what's the DGP for y_t for all t ? > > Josef > > > > > DG > > > >> > >> It's not non- or semi-parametric. > >> > >> Josef > >> > >> > > >> > DG > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > >> > > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > -- > > Mathematician: noun, someone who disavows certainty when their > uncertainty > > set is non-empty, even if that set has measure zero. > > > > Hope: noun, that delusive spirit which escaped Pandora's jar and, with > her > > lies, prevents mankind from committing a general suicide. (As > interpreted > > by Robert Graves) > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... 
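To see the point quoted above in code: for the simple linear model
y_t = x_t * beta + e_t with normal errors, the likelihood is a joint
function of beta and the noise variance, so maximizing it apportions the
data's information between the mechanistic and the noise parameters. A
minimal, hedged sketch (simulated data, made-up names, iid errors for
brevity):

import numpy as np
from scipy import optimize

np.random.seed(0)
nobs = 500
X = np.column_stack((np.ones(nobs), np.random.randn(nobs)))
y = np.dot(X, [1.0, 2.0]) + 0.5 * np.random.randn(nobs)

def negloglike(params, y, X):
    beta, sigma2 = params[:-1], params[-1]
    if sigma2 <= 0:  # keep Nelder-Mead away from invalid variances
        return np.inf
    resid = y - np.dot(X, beta)
    # -log L for iid N(0, sigma2) errors; with dependent errors, Sigma
    # replaces sigma2*I and this becomes a multivariate normal density
    return 0.5 * (len(y) * np.log(2 * np.pi * sigma2)
                  + np.dot(resid, resid) / sigma2)

phat = optimize.fmin(negloglike, [0.0, 0.0, 1.0], args=(y, X))
# phat[:2] estimates beta (and coincides with OLS here); phat[2]
# estimates the error variance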
From jsseabold at gmail.com Mon Jun 21 21:43:11 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Mon, 21 Jun 2010 21:43:11 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Mon, Jun 21, 2010 at 8:41 PM, David Goldsmith wrote:
> Ah, now I understand (I think). So in the general case, the procedure
> needs to "apportion" the information in the data among the parameters
> of the "mechanistic" part of the model and the parameters of the
> "random noise" part of the model, and the maximum likelihood equations
> give you the values of all these parameters (the mechanistic ones and
> the noise ones) that maximize the likelihood of observing the data one
> observed, correct?

Yes, I think you've got it for the more general case that Josef
describes.

Skipper

From eneide.odissea at gmail.com Tue Jun 22 03:46:28 2010
From: eneide.odissea at gmail.com (eneide.odissea)
Date: Tue, 22 Jun 2010 09:46:28 +0200
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

Hi All,
I need to use a maximum likelihood algorithm to fit the parameters of a
GARCH(1,1) model. Is the distribution to be assumed normal?

From josef.pktd at gmail.com Tue Jun 22 04:14:07 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 22 Jun 2010 04:14:07 -0400
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

On Tue, Jun 22, 2010 at 3:46 AM, eneide.odissea wrote:
> Hi All
> I need to use a maximum likelihood algorithm to fit the parameters of a
> GARCH(1,1) model. Is the distribution to be assumed normal?
loglike_GARCH11, assuming a normal distribution and a constant or removed
mean:

http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head:/scikits/statsmodels/sandbox/regression/mle.py#L1002

A simple example of estimation with scipy.optimize.fmin:

http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head:/scikits/statsmodels/sandbox/examples/example_garch.py#L46

The normal distribution is the standard choice, but there are also
several other distributions that are used for GARCH, e.g. the
t-distribution.

garch11 looks OK in my tests, but overall the garch code is still a mess,
and it was written before the recent improvements to MLE in statsmodels.
I've never seen any other GARCH code in Python.

Josef
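For orientation, a hedged sketch of the GARCH(1,1) log-likelihood
described above (normal errors, mean already removed; the parameter
names are made up, and the statsmodels sandbox code linked above is the
real reference):

import numpy as np
from scipy import optimize

def garch11_negloglike(params, e):
    # params = (omega, alpha, beta); e = demeaned returns
    omega, alpha, beta = params
    # crude bounds to keep Nelder-Mead in the stationary region
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    sigma2 = np.empty(len(e))
    sigma2[0] = e.var()  # one simple choice of initial condition
    for t in range(1, len(e)):
        sigma2[t] = omega + alpha * e[t-1]**2 + beta * sigma2[t-1]
    # sum of normal log-densities with time-varying variance
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + e**2 / sigma2)

# usage sketch, with simulated iid data standing in for real returns:
e = np.random.randn(1000)
phat = optimize.fmin(garch11_negloglike, [0.1, 0.1, 0.8], args=(e,))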
From eneide.odissea at gmail.com Tue Jun 22 04:44:02 2010
From: eneide.odissea at gmail.com (eneide.odissea)
Date: Tue, 22 Jun 2010 10:44:02 +0200
Subject: [SciPy-User] max likelihood
In-Reply-To:
References:
Message-ID:

Thanks to everybody.

Eo
From jsseabold at gmail.com Tue Jun 22 13:49:30 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Tue, 22 Jun 2010 13:49:30 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
Message-ID:

I am trying to compute the autocorrelation via convolution and via FFT,
and am far from an expert in DSP. I'm wondering if someone can spot
anything that might introduce numerical inaccuracies, or if I'm stuck
with the following two being slightly different.

Generate some autocorrelated data:

import numpy as np
nobs = 150000
x = np.zeros((nobs))
for i in range(1,nobs):
    x[i] = .85 * x[i-1] + np.random.randn()

# compute ACF using convolution

x0 = x - x.mean()

# this takes a while for the big data
acf1 = np.correlate(x0,x0,'full')[nobs-1:]/nobs
acf1 /= acf1[0]

# compute ACF using FFT

Frf = np.fft.fft(x0, n=2*nobs) # zero-pad for separability
Sf = Frf * Frf.conjugate()
acf2 = np.fft.ifft(Sf)
acf2 = acf2[1:nobs+1]/nobs
acf2 /= acf2[0]
acf2 = acf2.real

np.linalg.norm(acf1-acf2, ord=2)

They are pretty close, but I would expect them to be closer than this.

np.max(acf1-acf2)
0.006581962491189159

np.min(acf1-acf2)
-0.0062705596399049799

Skipper

From josef.pktd at gmail.com Tue Jun 22 15:46:23 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 22 Jun 2010 15:46:23 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To:
References:
Message-ID:

On Tue, Jun 22, 2010 at 1:49 PM, Skipper Seabold wrote:
> They are pretty close, but I would expect them to be closer than this.

I don't see anything, but I don't remember these things by heart. Why
don't you use scipy.signal.fftconvolve, or steal the source, which I did
for some version of fft convolutions.

BTW: the best padding is to a power of 2. There is also the issue of
one-sided (past) or two-sided (past and future) correlation, but I don't
remember whether it changes anything in this case.

(I have to look; I thought I tried or copied acf with fft from somewhere,
maybe nitime.)

Josef
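For reference, a minimal sketch of the fftconvolve route suggested
above, using the fact that correlation is convolution with the reversed
series (the AR(1) data is regenerated so the snippet stands alone):

import numpy as np
from scipy import signal

np.random.seed(0)
nobs = 150000
x = np.zeros(nobs)
for i in range(1, nobs):
    x[i] = .85 * x[i-1] + np.random.randn()
x0 = x - x.mean()

# np.correlate(x0, x0, 'full') == np.convolve(x0, x0[::-1], 'full'),
# so the FFT-based convolution yields the same autocovariances, faster
acf = signal.fftconvolve(x0, x0[::-1], mode='full')[nobs-1:] / nobs
acf /= acf[0]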
From jsseabold at gmail.com Tue Jun 22 16:02:03 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Tue, 22 Jun 2010 16:02:03 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To:
References:
Message-ID:

On Tue, Jun 22, 2010 at 3:46 PM, wrote:
> I don't see anything, but I don't remember these things by heart. Why
> don't you use scipy.signal.fftconvolve, or steal the source, which I
> did for some version of fft convolutions.

Because I don't know what it does? I am also trying to teach myself about
Fourier series and FFTs, so I am sure there is plenty I am missing.

> BTW: the best padding is to a power of 2. There is also the issue of
> one-sided (past) or two-sided (past and future) correlation, but I
> don't remember whether it changes anything in this case.

The padding here is for the separability of past and future correlations,
I believe (the last sentence that goes from pp. 383-4).

> (I have to look; I thought I tried or copied acf with fft from
> somewhere, maybe nitime.)

Your version didn't have fft that I saw. Let me know. I added the above
fft version, but it only agrees with the correlate version to ~5e-3 in
the worst case, as above; I just thought it would be closer. I am
surprised that this isn't already somewhere; if anyone knows of an
implementation, that would be great.

Skipper

From jsseabold at gmail.com Tue Jun 22 16:14:38 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Tue, 22 Jun 2010 16:14:38 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To:
References:
Message-ID:

On Tue, Jun 22, 2010 at 3:46 PM, wrote:
> (I have to look; I thought I tried or copied acf with fft from
> somewhere, maybe nitime.)

Essentially the same as the above, less the normalization:

http://github.com/fperez/nitime/blob/master/nitime/utils.py#L164

From jsseabold at gmail.com Tue Jun 22 16:21:26 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Tue, 22 Jun 2010 16:21:26 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To:
References:
Message-ID:

D'oh. I figured it out.
Change

    acf2 = acf2[1:nobs+1]/nobs

to

    acf2 = acf2[:nobs]/nobs

np.linalg.norm(acf1-acf2, ord=2)
4.5614763234630347e-15

np.allclose(acf1,acf2)
True

Sorry for the noise.

Skipper

From david at silveregg.co.jp Tue Jun 22 20:47:22 2010
From: david at silveregg.co.jp (David)
Date: Wed, 23 Jun 2010 09:47:22 +0900
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To:
References:
Message-ID: <4C21599A.5040604@silveregg.co.jp>

On 06/23/2010 02:49 AM, Skipper Seabold wrote:
> They are pretty close, but I would expect them to be closer than this.

You could look at my scikits talkbox, which has both FFT-based and
brute-force autocorrelation. I don't think they have such a big
difference between implementations.

David

From d.l.goldsmith at gmail.com Thu Jun 17 00:32:08 2010
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Wed, 16 Jun 2010 21:32:08 -0700
Subject: [SciPy-User] SciPy docs marathon: a little more info
Message-ID:

On Mon, Jun 14, 2010 at 2:05 AM, David Goldsmith wrote:
> Hi, all! The scipy doc marathon has gotten off to a very slow start
> this summer. We are producing less than 1000 words a week, perhaps
> because many universities are still finishing up spring classes. So,
> this is a second appeal to everyone to pitch in and help get scipy
> documented so that it's easy to learn how to use it. Because some of
> the packages are quite specialized, we need both "regular" contributors
> to write lots of pages, and some people experienced in using each
> module (and the mathematics behind the software) to make sure we don't
> water it down or make it wrong in the process. If you can help, please,
> now is the time to step forward. Thanks!
>
> On behalf of Joe and myself,
>
> David Goldsmith
> Olympia, WA

OK, a few people have come forward. Let me enumerate the categories that
still have no "declared" volunteer writer-editors (all categories are in
need of leaders):

Max. Entropy, Misc., Image Manip. (Milestone 6)
Signal processing (Milestone 8)
Sparse Matrices (Milestone 9)
Spatial Algorithms, Special funcs. (Milestone 10)
C/C++ Integration (Milestone 13)

As for the rest, only Interpolation (Milestone 3) has more than one
person (but I'm one of the two), and I'm the only person on four others.
So, hopefully, knowing specifically which areas are in dire need will
inspire people skilled in those areas to sign up.

Thanks for your time and help,

DG

PS: For your convenience, here's the link to the scipy Milestones page.
(Note that the Milestones link at the top of each Wiki page links, incorrectly in the case of the SciPy pages, to the NumPy Milestones page, which we are not actively working on in this Marathon; this is a known, reported bug in the Wiki program.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stevenj at alum.mit.edu Thu Jun 17 12:11:38 2010 From: stevenj at alum.mit.edu (Steven G. Johnson) Date: Thu, 17 Jun 2010 09:11:38 -0700 (PDT) Subject: [SciPy-User] [ANN] NLopt, a nonlinear optimization library, now with Python interface Message-ID: <76d97234-00ad-4516-a786-e71ee9a866f4@g19g2000yqc.googlegroups.com> The NLopt library, available from http://ab-initio.mit.edu/nlopt provides a common interface for a large number of algorithms for both global and local nonlinear optimizations, both with and without gradient information, and including both bound constraints and nonlinear equality/inequality constraints. NLopt is written in C, but now includes a Python interface (as well as interfaces for C++, Fortran, Matlab, Octave, and Guile). It is free software under the GNU LGPL. Regards, Steven G. Johnson From 381BDBB9888B58F4 at mytum.de Thu Jun 17 13:28:04 2010 From: 381BDBB9888B58F4 at mytum.de (Marco Halder) Date: Thu, 17 Jun 2010 17:28:04 -0000 Subject: [SciPy-User] get variance covariance matrix from polyfit Message-ID: <20100617172804.20525.28224@urania.ze.tum.de> An HTML attachment was scrubbed... URL: From 381BDBB9888B58F4 at mytum.de Fri Jun 18 07:32:03 2010 From: 381BDBB9888B58F4 at mytum.de (Marco Halder) Date: Fri, 18 Jun 2010 11:32:03 -0000 Subject: [SciPy-User] polyfit how can I get the covariance matrix of the fit coefficients in a linear regression model Message-ID: <20100618113203.43024.21916@klio.ze.tum.de> An HTML attachment was scrubbed... URL: From dp82 at nyu.edu Fri Jun 18 11:08:03 2010 From: dp82 at nyu.edu (David Pine) Date: Fri, 18 Jun 2010 17:08:03 +0200 Subject: [SciPy-User] info scipy.ndimage.filters.maximum_filter Message-ID: How do I get more detailed information about scipy.ndimage.filters.maximum_filter than is available at the Numpy and Scipy Documentation Reference Guide? The guide tells you what the routine does but not how it does it. From jeremy at jeremysanders.net Sun Jun 20 15:45:10 2010 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Sun, 20 Jun 2010 20:45:10 +0100 (BST) Subject: [SciPy-User] ANN: Veusz 1.8 Message-ID: Veusz 1.8 --------- Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2010 Jeremy Sanders Licenced under the GPL (version 2 or greater). Veusz is a Qt4 based scientific plotting package. It is written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF/SVG output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. Data can be captured from external sources such as internet sockets or other programs. 
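(Since the announcement mentions the embedding interface, a minimal embedded session follows roughly the pattern below. The sketch is based on the embedding examples in the Veusz documentation; treat the widget and setting names as illustrative and check the docs shipped with the release.)

import numpy as np
import veusz.embed as veusz

# open an embedded plotting window
g = veusz.Embedded('embedded window')

# build the widget hierarchy: a page, then a graph inside it
g.To(g.Add('page'))
g.To(g.Add('graph'))

# hand datasets to Veusz and plot them with an xy widget
x = np.linspace(0, 10, 100)
g.SetData('x', x)
g.SetData('y', np.sin(x))
g.Add('xy', xData='x', yData='y')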
Changes in 1.8: * Rewritten several inner loops in C++ giving speedups for large datasets * Lines, points and shapes are clipped before plotting, which speeds up plotting with some Qt backends and reduces file sizes * Data histogram feature added for calculating histograms of datasets, including cumulative histograms * Data import plugins allow the user to add support for importing any file type - see Veusz wiki for details * Experimental Bezier curve option for joining data points Minor changes in 1.8: * Fix zoom button default action * Speed up user interface when handling large numbers of datasets * Reset buttons added to several dialog boxes * Add engineering number formatting for axes: %VE giving e.g. 1k or 50m * Add drop down list of number formatting option to axis tick labels * Force Qt dialog boxes to be used instead of KDE ones, as KDE ones are currently broken * Use miter joins for plotting data points for sharper appearance * Add SetAntiAliasing command to command interface to toggle anti aliasing * Fix highlighting of errors when entering settings * Fix conversion of numpy to QVariants for new versions of PyQt * New point styles added for showing limits * Reworked internals of import dialog substantially * Several other minor bug fixes Note for people building from source and package builders: * Veusz now contains C++ code, dependent for building on the development libraries of SIP, PyQt4 and Qt4. Note that Veusz will still work (but more slowly) without this helper library. Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Bar graphs * Plotting dates * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * Shapes and arrows on plots * LaTeX-like formatting for text * EPS/PDF/PNG/SVG/EMF export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV, FITS and user-plugin importing * Data can be captured from external sources * User defined functions, constants and can import external Python functions Requirements for source install: Python (2.4 or greater required) http://www.python.org/ Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Optional: Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/ PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits pyemf >= 2.0.0 (optional for EMF export) http://pyemf.sourceforge.net/ For EMF and better SVG export, PyQt >= 4.6 or better is required, to fix a bug in the C++ wrapping For documentation on using Veusz, see the "Documents" directory. The manual is in PDF, HTML and text format (generated from docbook). The examples are also useful documentation. Please also see and contribute to the Veusz wiki: http://barmag.net/veusz-wiki/ Issues with the current version: * Plots can sometimes be slow using antialiasing. Go to the preferences dialog or right click on the plot to disable antialiasing. * Some recent versions of PyQt/SIP will causes crashes when exporting SVG files. Update to 4.7.4 (if released) or a recent snapshot to solve this problem. If you enjoy using Veusz, I would love to hear from you. 
Please join the mailing lists at https://gna.org/mail/?group=veusz to
discuss new features or if you'd like to contribute code. The latest
code can always be found in the SVN repository.

Jeremy Sanders

From jallikattu at googlemail.com  Mon Jun 21 01:47:34 2010
From: jallikattu at googlemail.com (morovia morovia)
Date: Mon, 21 Jun 2010 11:17:34 +0530
Subject: [SciPy-User] solving linear algebra and substitute in diff equations reg.
Message-ID: 

Hello,

I have 2 homogeneous linear algebraic equations and 4 differential
equations.  Solving the algebraic equations and substituting, I can
eliminate 2 variables out of 6, resulting in 4 differential equations
which can be written in matrix form for further analysis.  Presently I
am using the individual elements of the matrix to compute.

I am wondering whether this substitution and solving can be carried out
directly through scipy, or whether sympy can be used for this purpose.

Thanks in advance,
Best regards,
Morovia.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
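(Regarding the question above: sympy can indeed carry out the elimination symbolically and hand back the reduced coefficient matrix. Below is a minimal sketch on a made-up system of the same shape, two homogeneous constraints and four linear ODEs; the equations are invented purely for illustration, not Morovia's actual ones.)

import sympy as sp

# toy stand-in: six unknowns, two homogeneous algebraic constraints,
# four linear ODEs (all equations invented for illustration)
x1, x2, x3, x4, y1, y2 = sp.symbols('x1 x2 x3 x4 y1 y2')

# solve the two algebraic equations for y1, y2 to eliminate them
elim = sp.solve([y1 + 2*x1 - x2, y2 - x3 + 3*x4], [y1, y2])

# right-hand sides of the four ODEs, originally written in all six
# variables, with y1, y2 substituted away
rhs = sp.Matrix([x2 + y1, x3 - y2, x4 + y1, x1 - y2]).subs(elim)

# coefficient matrix A of the reduced linear system dx/dt = A*x
A = rhs.jacobian(sp.Matrix([x1, x2, x3, x4]))
print A

(The symbolic A can then be converted to floats, e.g. with sp.lambdify or by evaluating its entries, for numerical analysis in scipy.)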
From g.statkute at gmail.com  Wed Jun 23 03:23:57 2010
From: g.statkute at gmail.com (gintare statkute)
Date: Wed, 23 Jun 2010 10:23:57 +0300
Subject: [SciPy-User] Lapack testing
Message-ID: 

I am trying to install Scientific Python and have to check whether
LAPACK is working.  Following the manual, I got several instructions
which I do not understand, from:

http://www.netlib.org/lapack/lawn81/node24.html
http://www.netlib.org/lapack/lawn41/index.html

How should I understand these sentences?

  Compile the files xLINTSTF and either SCLNTSTF or DZLNTSTF and link
  them to your matrix generator library, your LAPACK library, and your
  BLAS library.
  For each of the test programs, associate the appropriate data file
  with Fortran unit number 5.  Associate a suitably named file (e.g.,
  SLINTST.OUT) with unit number 6.  Run the test programs.

Behind those sentences there must be specific commands for compilation,
linking, and association.  And how do I run the test programs - there
are tens of them in the directories /testing/link and /testing/eig.

regards,
gintare statkute
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lasagnadavide at gmail.com  Wed Jun 23 03:37:17 2010
From: lasagnadavide at gmail.com (davide)
Date: Wed, 23 Jun 2010 09:37:17 +0200
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To: 
References: 
Message-ID: <1277278637.14207.4.camel@antares>

Have a look at "Random Data: Analysis and Measurement Procedures" by
Bendat and Piersol.  Here is some code of mine; it still needs some
tweaks.  To test it, try to autocorrelate a long time history of a pure
sine wave, then compare the result with a cosine of the same frequency.

def acorr(y, fs=1, maxlags=None, normed=True, full=False):
    """
    Get the auto-correlation function of a signal.

    Parameters
    ----------
    y : a one dimensional array
    maxlags : the maximum number of time delays for which to compute
        the auto-correlation.
    normed : a boolean option. If true the normalized auto-correlation
        function is returned.
    fs : the sampling frequency of the data
    full : if True a time array is also returned, for plotting purposes

    Returns
    -------
    rho : the auto-correlation function
    t : a time array. Only if full==True

    Example
    -------
    t = np.arange(2**20) / 1000.0
    y = np.sin(2*np.pi*100*t)
    rho = acorr(y, maxlags=1000)
    """
    if not maxlags:
        maxlags = len(y)/2
    if maxlags > len(y)/2:
        maxlags = len(y)/2

    fs = float(fs)

    # pad with zeros
    x = np.hstack((y, np.zeros(len(y))))

    # compute FFT transform of signal
    sp = np.fft.rfft(x)
    tmp = np.empty_like(sp)
    tmp = np.conj(sp, tmp)
    tmp = np.multiply(tmp, sp, tmp)
    rho = np.fft.irfft(tmp)

    # divide by array length
    rho = np.divide(rho, len(y), rho)[:maxlags]

    # obtain the unbiased estimate
    tmp = len(y) / (len(y) - np.arange(maxlags, dtype=np.float64))
    rho = np.multiply(rho, tmp, rho)

    if normed:
        rho = rho / rho[0]

    if full:
        t = np.arange(maxlags, dtype=np.float32) / fs
        return t, rho
    else:
        return rho

From seb.haase at gmail.com  Wed Jun 23 04:22:18 2010
From: seb.haase at gmail.com (Sebastian Haase)
Date: Wed, 23 Jun 2010 10:22:18 +0200
Subject: [SciPy-User] info scipy.ndimage.filters.maximum_filter
In-Reply-To: 
References: 
Message-ID: 

Did you try to google for it?
- Sebastian Haase

On Fri, Jun 18, 2010 at 5:08 PM, David Pine wrote:
> How do I get more detailed information about
> scipy.ndimage.filters.maximum_filter than is available at the Numpy
> and Scipy Documentation Reference Guide?  The guide tells you what the
> routine does but not how it does it.

From djpine at gmail.com  Wed Jun 23 04:54:05 2010
From: djpine at gmail.com (David Pine)
Date: Wed, 23 Jun 2010 10:54:05 +0200
Subject: [SciPy-User] info scipy.ndimage.filters.maximum_filter
In-Reply-To: 
References: 
Message-ID: 

I did, but I tried again and found something called "SciPy Dev Wiki"
(http://projects.scipy.org/scipy/browser/trunk/scipy/ndimage/filters.py?rev=6405)
which contains the original Python code.  I can figure things out from
that.  It seems odd to me that there seems to be no link to the source
code for these routines on the SciPy documentation site.

-David Pine

On Jun 23, 2010, at 10:22 AM, Sebastian Haase wrote:

> did you try to google for it ?
> - Sebastian Haase
>
> On Fri, Jun 18, 2010 at 5:08 PM, David Pine wrote:
>> How do I get more detailed information about
>> scipy.ndimage.filters.maximum_filter [...]

From emmanuelle.gouillart at normalesup.org  Wed Jun 23 05:17:41 2010
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Wed, 23 Jun 2010 11:17:41 +0200
Subject: [SciPy-User] info scipy.ndimage.filters.maximum_filter
In-Reply-To: 
References: 
Message-ID: <20100623091741.GA12170@phare.normalesup.org>

Hello,

most of scipy.ndimage's routines are written in C and then wrapped in
Python.  This makes code introspection a bit more difficult.  For
scipy.ndimage.filters.maximum_filter, the core of the routine can be
found in the source file scipy/ndimage/src/ni_filters.c (the path of
the scipy sources depends on your installation), inside the
NI_MinOrMaxFilter1D function (I don't know if you're familiar with C).
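(A small addition to the above: the standard inspect module can at least locate and display the Python wrapper from a plain interpreter session. A sketch follows, keeping in mind that the compiled _nd_image internals remain invisible to introspection.)

import inspect
from scipy.ndimage import filters

# path of the Python wrapper module, e.g. .../scipy/ndimage/filters.py
print inspect.getsourcefile(filters.maximum_filter)

# the wrapper's source, which shows the call down into the compiled
# extension where code like ni_filters.c ends up
print inspect.getsource(filters.maximum_filter)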
If you're using the Ipython shell, a useful trick is to type '%edit function_name" to open the source code in a text editor, for example >>> from scipy import ndimage >>> %edit ndimage.maximum_filter Editing... done. Executing edited code... will open the the Python file with the wrapper. Of course, this feature is more useful when functions are written only with Python. I don't know if this partly answers your question or not... Cheers, Emmanuelle On Wed, Jun 23, 2010 at 10:54:05AM +0200, David Pine wrote: > I did but I tried again and found something called "SciPy Dev Wiki" (http://projects.scipy.org/scipy/browser/trunk/scipy/ndimage/filters.py?rev=6405) which contains the original Python code. I can figure things out from that. It seems odd to me that the there seems to be no link to the source code for these routines on the SciPy documentation site. > -David Pine > On Jun 23, 2010, at 10:22 AM, Sebastian Haase wrote: > > did you try to google for it ? > > - Sebastian Haase > > On Fri, Jun 18, 2010 at 5:08 PM, David Pine wrote: > >> How do I get more detailed information about scipy.ndimage.filters.maximum_filter than is available at the Numpy and Scipy Documentation Reference Guide? The guide tells you what the routine does but not how it does it. > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From lists at hilboll.de Wed Jun 23 08:11:09 2010 From: lists at hilboll.de (Andreas) Date: Wed, 23 Jun 2010 14:11:09 +0200 (CEST) Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average Message-ID: Hi, in the docstring to mov_average() it says: The result will also be masked at i if any of the input values in the slice ``[i-span:i+1]`` are masked Is there any way to prevent this behaviour? I have a very patchy timeseries (one value every 3 to 6 days), and I'd like to use the mov_average function to smooth these data. Any idea how to do that? mov_average(data,20) would have been perfect, if not for the masked values ... What I tried so far is mov_average(data.compressed(),20) but that has the same size as data.compressed(). I would really like to have daily values .. Thanks for your insight! Cheers, Andreas. From pgmdevlist at gmail.com Wed Jun 23 10:40:16 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 23 Jun 2010 10:40:16 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: References: Message-ID: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> On Jun 23, 2010, at 8:11 AM, Andreas wrote: > Hi, > > in the docstring to mov_average() it says: > > The result will also be masked at i if any of the input values in the > slice ``[i-span:i+1]`` are masked > > Is there any way to prevent this behaviour? Nope. That limits nasty surprises otherwise. > I have a very patchy > timeseries (one value every 3 to 6 days), and I'd like to use the > mov_average function to smooth these data. > Any idea how to do that? > mov_average(data,20) would have been perfect, if not for the masked values > ... 
> What I tried so far is > > mov_average(data.compressed(),20) > > but that has the same size as data.compressed(). I would really like to > have daily values .. You could try to fill your missing values beforehand, w/ functions like backward_fill and forward_fill, then passing your series to mov_average. Or the reverse way: compress your data to get rid of the missing values, pass it to mov_average, reconvert it to a daily series w/ fill_missing_dates (to get the right number of dates), then fill it w/ backward_fill or forward_fill. Let me know how it goes. From wesmckinn at gmail.com Wed Jun 23 10:48:04 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 23 Jun 2010 10:48:04 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: On Wed, Jun 23, 2010 at 10:40 AM, Pierre GM wrote: > > On Jun 23, 2010, at 8:11 AM, Andreas wrote: > >> Hi, >> >> in the docstring to mov_average() it says: >> >> ? The result will also be masked at i if any of the input values in the >> ? slice ``[i-span:i+1]`` are masked >> >> Is there any way to prevent this behaviour? > > Nope. That limits nasty surprises otherwise. > >> I have a very patchy >> timeseries (one value every 3 to 6 days), and I'd like to use the >> mov_average function to smooth these data. >> Any idea how to do that? >> mov_average(data,20) would have been perfect, if not for the masked values >> ... >> What I tried so far is >> >> ? mov_average(data.compressed(),20) >> >> but that has the same size as data.compressed(). I would really like to >> have daily values .. > > You could try to fill your missing values beforehand, w/ functions like backward_fill and forward_fill, then passing your series to mov_average. Or the reverse way: compress your data to get rid of the missing values, pass it to mov_average, reconvert it to a daily series w/ fill_missing_dates (to get the right number of dates), then fill it w/ backward_fill or forward_fill. > > Let me know how it goes. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > In pandas all my moving window functions accept a "min_periods" argument so that it will place a value in a data hole assuming there are sufficient observations in window: In [4]: arr Out[4]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., NaN, NaN, NaN, NaN, NaN, 15., 16., 17., 18., 19.]) In [5]: rolling_mean(arr, 10, min_periods=1) Out[5]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. , 6.5, 7. , 9. , 11. , 13. , 15. , 17. ]) Have you thought of adding this functionality to scikits.timeseries? From lists at hilboll.de Wed Jun 23 11:45:56 2010 From: lists at hilboll.de (Andreas) Date: Wed, 23 Jun 2010 17:45:56 +0200 (CEST) Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: Thanks a lot for your input! > You could try to fill your missing values beforehand, w/ functions like > backward_fill and forward_fill, then passing your series to mov_average. Well, that's not really what I want. By doing what you suggest, I make the assumption that the value actually changed on the day for which I have the measurement. 
But each measurement is only one single point in time, so I do not want to make this assumption. > Or the reverse way: compress your data to get rid of the missing values, > pass it to mov_average, reconvert it to a daily series w/ > fill_missing_dates (to get the right number of dates), then fill it w/ > backward_fill or forward_fill. See above. Also not really what I want. Basically, I'm looking for a simple and efficient way to do something like this:: w = 11 # the window size s = (w-1)*.5 for d in data.dates: newdata[d] = data[d-s:d+s+1].mean() Cheers, Andreas. From lists at hilboll.de Wed Jun 23 11:59:24 2010 From: lists at hilboll.de (Andreas) Date: Wed, 23 Jun 2010 17:59:24 +0200 (CEST) Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: Hi, thanks for your input! > In [4]: arr > Out[4]: > array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., NaN, > NaN, NaN, NaN, NaN, 15., 16., 17., 18., 19.]) > > In [5]: rolling_mean(arr, 10, min_periods=1) > Out[5]: > array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , > 4.5, 5. , 5.5, 6. , 6.5, 7. , 9. , 11. , 13. , > 15. , 17. ]) Actually, this is exactly what I'm looking for. Well, almost. For my application, I would need the window to be centered on the current value, and be going only backwards from it. (I'm analyzing atmospheric measurement data.) Any ideas how this can be done? Cheers, Andreas. From wesmckinn at gmail.com Wed Jun 23 12:02:51 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 23 Jun 2010 12:02:51 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: On Wed, Jun 23, 2010 at 11:59 AM, Andreas wrote: > Hi, thanks for your input! > >> In [4]: arr >> Out[4]: >> array([ ?0., ? 1., ? 2., ? 3., ? 4., ? 5., ? 6., ? 7., ? 8., ? 9., ?NaN, >> ? ? ? ? NaN, ?NaN, ?NaN, ?NaN, ?15., ?16., ?17., ?18., ?19.]) >> >> In [5]: rolling_mean(arr, 10, min_periods=1) >> Out[5]: >> array([ ?0. , ? 0.5, ? 1. , ? 1.5, ? 2. , ? 2.5, ? 3. , ? 3.5, ? 4. , >> ? ? ? ? ?4.5, ? 5. , ? 5.5, ? 6. , ? 6.5, ? 7. , ? 9. , ?11. , ?13. , >> ? ? ? ? 15. , ?17. ]) > > Actually, this is exactly what I'm looking for. Well, almost. For my > application, I would need the window to be centered on the current value, > and be going only backwards from it. (I'm analyzing atmospheric > measurement data.) > > Any ideas how this can be done? > > Cheers, > > Andreas. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Could you clarify what you mean by "centered on" (i.e. at time T which data points are included relative to T)? I work mostly with financial data and there it only makes sense to include trailing observations-- so the window at time T includes periods T back to T - window + 1. From wesmckinn at gmail.com Wed Jun 23 12:08:26 2010 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 23 Jun 2010 12:08:26 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: On Wed, Jun 23, 2010 at 12:02 PM, Wes McKinney wrote: > On Wed, Jun 23, 2010 at 11:59 AM, Andreas wrote: >> Hi, thanks for your input! >> >>> In [4]: arr >>> Out[4]: >>> array([ ?0., ? 1., ? 2., ? 3., ? 4., ? 5., ? 6., ? 7., ? 8., ? 9., ?NaN, >>> ? 
? ? ? NaN, ?NaN, ?NaN, ?NaN, ?15., ?16., ?17., ?18., ?19.]) >>> >>> In [5]: rolling_mean(arr, 10, min_periods=1) >>> Out[5]: >>> array([ ?0. , ? 0.5, ? 1. , ? 1.5, ? 2. , ? 2.5, ? 3. , ? 3.5, ? 4. , >>> ? ? ? ? ?4.5, ? 5. , ? 5.5, ? 6. , ? 6.5, ? 7. , ? 9. , ?11. , ?13. , >>> ? ? ? ? 15. , ?17. ]) >> >> Actually, this is exactly what I'm looking for. Well, almost. For my >> application, I would need the window to be centered on the current value, >> and be going only backwards from it. (I'm analyzing atmospheric >> measurement data.) >> >> Any ideas how this can be done? >> >> Cheers, >> >> Andreas. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > Could you clarify what you mean by "centered on" (i.e. at time T which > data points are included relative to T)? I work mostly with financial > data and there it only makes sense to include trailing observations-- > so the window at time T includes periods T back to T - window + 1. > Apologies, I see from your prior e-mail. In that case maybe it makes sense to shift the data back by window / 2 periods and then take the moving average? From jh at physics.ucf.edu Wed Jun 23 13:27:00 2010 From: jh at physics.ucf.edu (Joe Harrington) Date: Wed, 23 Jun 2010 13:27:00 -0400 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 Message-ID: I am copying this to scipy-dev, where documentation policy discussions belong, as it proposes a policy decision on the use of trademarked words in the documentation. I have exchanged email with Natasha Hellerich in UCF's Office of General Counsel about how to refer to MATLAB? in our documentation. As noted previously, our use, without their specific permission, falls under "fair use". Fair use requires good faith, which in turn requires that we follow the trademark holder's (reasonable) requests for how to use the term and acknowledge that the trademark is theirs. It also requires that we restrict ourselves to descriptive use of the trademarked term. Calling a module "matlab", even as part of "scipy.io.matlab", might not be in line with the latter requirement, since the thing referred to is not a product of The MathWorks?, but rather our own module for reading the MATLAB file format. Possibilities that seem more nominative to me include "matlabcompat", "matlabfilecompat", or something else that indicates that the module is not MATLAB nor any part of it and was not written by/at The MathWorks. I have not consulted a lawyer about this specific issue but that might be advisable once the community has decided which naming alternatives would be OK. The advantage of doing this now would be to have some time with a deprecation warning on the old name. If we made the change in response to a cease-and-desist letter, we might not have that luxury. Picking a naming convention that works with other software's file formats that we could implement in the future (e.g., for IDL? save files; IDL is a registered trademark of ITT, Inc.) would be good. The two relevant messages from our lawyer (minus lengthy quoted discussion that appeared after her signature) are below, forwarded with her permission. The first includes instructions for the use of "MATLAB" and the text of the trademark statement, which differs from that proposed earlier on scipy-user. 
Note that the US URL for the page she references is: http://www.mathworks.com/company/pressroom/editorial_guidelines.html I tried hard to get her to agree that we did not need to use the R-in-a-circle symbol at all. However, her second email below notes a case in which someone was forced by a US court to include the R-in-a-circle symbol in a fair use of someone else's registered trademark: G.D. Searle & Co. v. Hudson Pharm Corp 715 F.2d 837, 839( 3rd Cir. 1983) However, use of the symbol is only required on the first use in a document, at least in this case. Regarding placement of the first use of the term and the trademark statement, NumPy is a single entity imported into Python in a single command: import numpy as np The docs are separately available as a single PDF and a book-format HTML document tree. Each entity is thus a single item requiring a single trademark acknowledgement statement and R-in-a-circle symbol. In the PDF and HTML, the right place for the trademark statement and the use with the ? symbol is in the front matter. Then, all the uses in the main text, including all docstrings, are simply "MATLAB", unadorned by the ? symbol. I would suggest putting the trademark statement on page 1 of the PDF version, below the release number and date, right before Chapter 1, but any location would be fine as long as it is before chapter 1 and before any other use of the term. I would also suggest adding a copyright statement and an appropriate Creative Commons or loosely similar license to the PDF and HTML. The help() function in the software itself is merely an index browser into the collection of docs, capable of jumping around in the docs at random but not of actualy reordering the docs. The notion of "first" seems best addressed by the help(np) page (np.__doc__), since the PDF/HTML front matter does not exist in the help() system and since that page offers somewhat of an index into the rest of the docs. I suggest ending np.__doc__ with: The NumPy documentation occasionally refers to MATLAB?, which is a registered trademark of The MathWorks, Inc. The UCF lawyer's recommendation seems well in line with the web sites cited earlier in this thread. If anyone has any reason to object, now is the time to do so. Otherwise, I propose that we make the lawyer's recommendation our policy. Note that this is all based on US trademark law. If the law is different in your country, please speak up now so we can see if there is a policy that satisfies all countries' laws. Thanks, --jh-- Prof. Joseph Harrington Planetary Sciences Group Department of Physics MAP 414 4000 Central Florida Blvd. University of Central Florida Orlando, FL 32816-2385 jh at physics.ucf.edu planets.ucf.edu Date: Mon, 21 Jun 2010 13:14:10 -0400 From: "Natasha Hellerich" To: "gcounsel" , "Tanya Perry" , Subject: Re: Fwd: use of TM and R in computer documentation In-Reply-To: Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Disposition: inline Just as is stated on their website http://www.mathworks.co.uk/company/pressroom/editorial_guidelines.html MATLAB? - MATLAB should always be written with all letters in uppercase. Use the ? symbol on the first reference. Also, the statement MATLAB? is a registered trademark of The MathWorks, Inc. should be included. Please do not include any additional statements that were included on your previous email, such as "We use this trademark without permission from The MathWorks, etc." I specifically recommend AGAINST this statement. 
With respect to the computer documentation the employee is writing, I don't really know what this looks like, so it is difficult for me to judge whether to consider that as a single entity for purposes of following The MathWorks, Inc. guidelines with respect to using the ? symbol on the first reference. You are in a better position to judge whether this documentation constitutes a single entity. If you consider it as such, you can then make that argument. Also, not knowing the particulars surrounding this, my advice is limited to the trademark/symbol usage issues raised. Natasha Date: Wed, 16 Jun 2010 13:24:36 -0400 From: "Natasha Hellerich" To: Cc: "gcounsel" , "Tanya Perry" Subject: Re: Fwd: use of TM and R in computer documentation In-Reply-To: Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Disposition: inline Good afternoon, With respect to your follow-up question, I can offer the following information: As previously recommended, I took a look at the MathWorks website. They in fact provide the specific guidelines as to how they wish for others to refer to their trademarks, including when and how to include the circle R symbol. Please see the link below. http://www.mathworks.co.uk/company/pressroom/editorial_guidelines.html I recommend looking to other companies' web sites for guidance as well or contacting the entity at issue for the use of trademarks other than those owned by MathWorks. Generally, 17 United States Code (USC) Section 107 sets forth the concept of fair use. Fair use would be a defense to someone using another person's trademark. Fair use is the legal concept that would allow you to use another's trademark, within the parameters of fair use. So the fair use defense is one way to argue that you are not infringing a trademark, but it would require that the use is descriptive and in good faith and is used to describe the goods and services of another. If someone knows the other trademark is registered, then good faith use would show the use of the symbol. There is at least one court case G.D. Searle & Co. v. Hudson Pharm Corp 715 F.2d 837, 839( 3rd Cir. 1983) where a defendant in a trademark infringement case was ordered by the court to refer to his competitors' products using the registration symbol of R with a circle. I will ask Tanya Perry, our paralegal, to research additional case law regarding rules with respect to using another person's or entity's trademark in research articles or other commentary, as well as in other scenarios, i.e. whether there is additional case law out there that discusses the use of appropriate trademark symbols every time the trademark is mentioned vs. maybe just once at the beginning. Again, if a company already sets forth its own specific rules regarding use of trademark symbols when referring to their trademarks (such as MathWorks does via their web site), then those rules obviously should be followed. Tanya - please advise on what you can find regarding this issue. Thanks, Natasha From pgmdevlist at gmail.com Wed Jun 23 14:35:35 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 23 Jun 2010 14:35:35 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> Message-ID: <37B89DFC-CA3D-46DE-B939-9C2593B35BC6@gmail.com> On Jun 23, 2010, at 11:45 AM, Andreas wrote: > Thanks a lot for your input! 
> >> You could try to fill your missing values beforehand, w/ functions like >> backward_fill and forward_fill, then passing your series to mov_average. > > Well, that's not really what I want. By doing what you suggest, I make the > assumption that the value actually changed on the day for which I have the > measurement. But each measurement is only one single point in time, so I > do not want to make this assumption. Ah OK. Makes sense. > Basically, I'm looking for a simple and efficient way to do something like > this:: > > w = 11 # the window size > s = (w-1)*.5 > for d in data.dates: > newdata[d] = data[d-s:d+s+1].mean() Ah OK. Note that you should use cmov_mean, then... Well, several possibilities: * Make sure you don't have missing dates (use fill_missing_dates), then construct a list of slices and apply .mean() on the .series (so that you don't use __getitem__ on the whole series, only on the masked data part, saves some time). * Use some tricks: - the moving_funcs functions don't need timeseries as inputs, masked arrays are just fine - compute cmov_mean on the data part (filled w/ 0) - compute cmov_mean on the opposite of the mask (viz, np.logical_not(x.mask) - divide the first by the second. From pgmdevlist at gmail.com Wed Jun 23 14:49:22 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 23 Jun 2010 14:49:22 -0400 Subject: [SciPy-User] Question about scikits.timeseries.lib.moving_funcs.mov_average In-Reply-To: <37B89DFC-CA3D-46DE-B939-9C2593B35BC6@gmail.com> References: <16B2AD5E-B85B-42E5-A051-07008D4A7E4F@gmail.com> <37B89DFC-CA3D-46DE-B939-9C2593B35BC6@gmail.com> Message-ID: On Jun 23, 2010, at 2:35 PM, Pierre GM wrote: > > On Jun 23, 2010, at 11:45 AM, Andreas wrote: > >> Thanks a lot for your input! >> >>> You could try to fill your missing values beforehand, w/ functions like >>> backward_fill and forward_fill, then passing your series to mov_average. >> >> Well, that's not really what I want. By doing what you suggest, I make the >> assumption that the value actually changed on the day for which I have the >> measurement. But each measurement is only one single point in time, so I >> do not want to make this assumption. > > Ah OK. Makes sense. > >> Basically, I'm looking for a simple and efficient way to do something like >> this:: >> >> w = 11 # the window size >> s = (w-1)*.5 >> for d in data.dates: >> newdata[d] = data[d-s:d+s+1].mean() > > Ah OK. Note that you should use cmov_mean, then... > Well, several possibilities: > > * Make sure you don't have missing dates (use fill_missing_dates), then construct a list of slices and apply .mean() on the .series (so that you don't use __getitem__ on the whole series, only on the masked data part, saves some time). > > * Use some tricks: > - the moving_funcs functions don't need timeseries as inputs, masked arrays are just fine > - compute cmov_mean on the data part (filled w/ 0) > - compute cmov_mean on the opposite of the mask (viz, np.logical_not(x.mask) > - divide the first by the second. 
Here, let's have an example:

"""
import numpy as np
import scikits.timeseries as ts
import scikits.timeseries.lib.moving_funcs as mov

size = 50
x = ts.time_series(np.arange(size, dtype=float),
                   dates=ts.date_array(ts.Date('D', "2001-01-01"),
                                       length=size*3)[::3])
xx = x.fill_missing_dates()

zdata = mov.mov_sum(xx.filled(0), 20).data
zmask = mov.mov_sum(np.logical_not(xx.mask).astype(float), 20).data
print zdata[21], zmask[21]
print xx[1:22].mean()

zdata = mov.cmov_mean(xx.filled(0), 20).data
zmask = mov.cmov_mean(np.logical_not(xx.mask).astype(float), 20).data
print zdata[21], zmask[21]
print xx[11:32].mean()
"""

When dealing w/ masked arrays, or series with missing dates, it's
important to understand how things actually work.  ".mean" on a masked
array calls ".sum" on the ".data" part, then ".count" on the ".mask"
part.  When dealing w/ a time series, it's usually more efficient to
process the .data, the .mask and the .dates separately.  So, in your
problem, we're computing the centered mean on the data first (viz, the
sum divided by the span), then on the (opposite of the) mask, and
recomputing the result.  Note that cmov_ actually calls scipy.convolve,
not our own C code like the mov_ functions...

From pierre.raybaut at gmail.com  Wed Jun 23 16:42:46 2010
From: pierre.raybaut at gmail.com (Pierre Raybaut)
Date: Wed, 23 Jun 2010 22:42:46 +0200
Subject: [SciPy-User] Up-to-date SciPy/NumPy docs
Message-ID: 

Hi all,

I was planning to update the Python(x,y) NumPy and SciPy plugins up to
(resp.) v1.4.1 and v0.7.2 for a long time now, but I was waiting for
the documentation to be updated as well.  However, when I go to the
SciPy documentation website, there are still only outdated versions of
the NumPy and SciPy documentation available for download.

More precisely, I have the choice of downloading either outdated .chm
(the most interesting format for Windows) or .pdf documentation, or
"too recent" html versions (drafts).

Wouldn't it be more logical to propose a stable version of these
documentations along with the current stable releases?  I'm tired of
delivering draft versions with Python(x,y); it simply seems
unprofessional...

In other words, I would really appreciate the .chm docs being updated
to the latest stable releases of NumPy and SciPy.  I'm sure that all
Windows scientific Python users will appreciate it as well!

Cheers,
Pierre

From itsdho at ucla.edu  Wed Jun 23 16:57:39 2010
From: itsdho at ucla.edu (David Ho)
Date: Wed, 23 Jun 2010 13:57:39 -0700
Subject: [SciPy-User] calculate xcorr/acorr without plotting?
Message-ID: 

Hi all!

Is there a way to calculate a cross-correlation (xcorr) or
autocorrelation (acorr) without actually plotting a figure?

For example, for histograms, I can use matplotlib.pyplot.hist() to plot
a histogram, but I can also use numpy.histogram() if I just want to
compute the values without plotting a figure.  I'd like to do something
similar for cross-correlations.

Thanks for your help!

--David Ho
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From d.l.goldsmith at gmail.com Wed Jun 23 18:02:18 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 23 Jun 2010 15:02:18 -0700 Subject: [SciPy-User] Up-to-date SciPy/NumPy docs In-Reply-To: References: Message-ID: The problem is, the doc is *not* stable: the NumPy doc is quasi-stable (there remain 59 "documents" - docstrings, user guide pages, reference pages, and tutorial pages - in "Being written" status, and 81 in "Needs editing" status, 19 of which are new since the end of April, i.e., NumPy continues to be moving target), and the SciPy doc is in a highly "non-uniform" state, both with respect to quantity and quality, which is putting it euphemistically IMHO: frankly, if you distribute the SciPy doc now as it is, you're doing us a better service by telling people that it *isn't* stable, because saying it *is* in its present state, well, that's what would make us look unprofessional (again, IMO). More to the point, the SciPy doc is presently the focus of an ostensibly community-wide effort to improve it, so, again ostensibly, presently it is anything but "stable." There's a lot of work to be done, but, under the philosophy that some doc is better than no doc at all, the present, dynamic state hasn't stopped us from releasing it "as is" in the past; indeed, in the past, I believe this has been the main source of doc improvement: a user posts a doc deficiency to the list, and that's when it's been taken care of - the "itch-scratching" approach. I *think* we all wish that the doc could be completed yesterday, but the fact of the matter is, until the community steps up and decides that writing good, clear and complete doc for the stuff that's already there is *at least* as important as increasing the code base, the state of the doc will forever be "unstable." DG On Wed, Jun 23, 2010 at 1:42 PM, Pierre Raybaut wrote: > Hi all, > > I was planning to update Python(x,y) NumPy and SciPy plugins up to > (resp.) v1.4.1 and v0.7.2 for a long time now but I was waiting for > the documentation to be updated as well. But when I'm going to SciPy > documentation website, there are either still outdated versions of > NumPy and Scipy documentation available for download. > > More precisely, I have the choice to download either outdated .chm > (the most interesting format for Windows) or .pdf documentations, or > "too recent" html versions (drafts). > > Wouldn't be more logical to propose a stable version of these > documentations along with current stable releases? > I'm tired to deliver draft versions with Python(x,y), it simply seems > unprofessional... > > In other words, I would really appreciate .chm docs to be updated to > the latest stable releases of NumPy and SciPy. I'm sure that all > Windows scientific Python users will appreciate as well! > > Cheers, > Pierre > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Mathematician: noun, someone who disavows certainty when their uncertainty set is non-empty, even if that set has measure zero. Hope: noun, that delusive spirit which escaped Pandora's jar and, with her lies, prevents mankind from committing a general suicide. (As interpreted by Robert Graves) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Wed Jun 23 18:42:29 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Jun 2010 18:42:29 -0400 Subject: [SciPy-User] calculate xcorr/acorr without plotting? In-Reply-To: References: Message-ID: On Wed, Jun 23, 2010 at 4:57 PM, David Ho wrote: > Hi all! > > Is there a way to calculate a cross-correlation (xcorr) or autocorrelation > (acorr) without actually plotting a figure? > > For example, for histograms, I can use matplotlib.pyplot.hist() to plot a > histogram, but I can also use numpy.histogram() if I just want to compute > the values without plotting a figure. > I'd like to do something similar for cross-correlations. not directly, but it's just a few lines, that can be copied from the source of matplotlib or statsmodels or several other packages eg. acov = numpy.correlate(x,x)/len(x) see the thread "Autocorrelation function: Convolution vs FFT" from the last two days. same works with xcorr Josef > > Thanks for your help! > > --David Ho > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Wed Jun 23 20:25:13 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Jun 2010 20:25:13 -0400 Subject: [SciPy-User] Up-to-date SciPy/NumPy docs In-Reply-To: References: Message-ID: On Wed, Jun 23, 2010 at 6:02 PM, David Goldsmith wrote: > The problem is, the doc is *not* stable: the NumPy doc is quasi-stable > (there remain 59 "documents" - docstrings, user guide pages, reference > pages, and tutorial pages - in "Being written" status, and 81 in "Needs > editing" status, 19 of which are new since the end of April, i.e., NumPy > continues to be moving target), and the SciPy doc is in a highly > "non-uniform" state, both with respect to quantity and quality, which is > putting it euphemistically IMHO: frankly, if you distribute the SciPy doc > now as it is, you're doing us a better service by telling people that it > *isn't* stable, because saying it *is* in its present state, well, that's > what would make us look unprofessional (again, IMO).? More to the point, the > SciPy doc is presently the focus of an ostensibly community-wide effort to > improve it, so, again ostensibly, presently it is anything but "stable." > > There's a lot of work to be done, but, under the philosophy that some doc is > better than no doc at all, the present, dynamic state hasn't stopped us from > releasing it "as is" in the past; indeed, in the past, I believe this has > been the main source of doc improvement: a user posts a doc deficiency to > the list, and that's when it's been taken care of - the "itch-scratching" > approach.? I *think* we all wish that the doc could be completed yesterday, > but the fact of the matter is, until the community steps up and decides that > writing good, clear and complete doc for the stuff that's already there is > *at least* as important as increasing the code base, the state of the doc > will forever be "unstable." Even if it is permanent work in progress, the current scipy chm is dated 3/8/2009 ! This might correspond to the code of scipy 0.7.x but misses all doc improvements. I downloaded this chm file already several times hoping for an updated version, only to see it's still the same. Last time I tried, it was relatively easy to build with a .bat file on Windows. What's the workflow to keep the chm updated? 
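(For reference, the workflow is essentially two steps: Sphinx's htmlhelp builder writes the HTML Help project, and Microsoft's HTML Help Workshop compiles it into a .chm. A rough sketch driven from Python follows; the doc paths and the numpy.hhp project name are assumptions that depend on the local checkout and on the htmlhelp_basename setting.)

import subprocess

# 1) have Sphinx emit the HTML Help project (.hhp) and pages
subprocess.check_call(['sphinx-build', '-b', 'htmlhelp',
                       'doc/source', 'build/htmlhelp'])

# 2) compile with HTML Help Workshop; hhc.exe returns 1 on success,
#    so use a plain call() rather than check_call()
subprocess.call([r'C:\Program Files\HTML Help Workshop\hhc.exe',
                 r'build\htmlhelp\numpy.hhp'])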
For multiversion docs, we might still have incomplete number of "changed" and "added" notes to the docstrings. Josef > > DG > > On Wed, Jun 23, 2010 at 1:42 PM, Pierre Raybaut > wrote: >> >> Hi all, >> >> I was planning to update Python(x,y) NumPy and SciPy plugins up to >> (resp.) v1.4.1 and v0.7.2 for a long time now but I was waiting for >> the documentation to be updated as well. But when I'm going to SciPy >> documentation website, there are either still outdated versions of >> NumPy and Scipy documentation available for download. >> >> More precisely, I have the choice to download either outdated .chm >> (the most interesting format for Windows) or .pdf documentations, or >> "too recent" html versions (drafts). >> >> Wouldn't be more logical to propose a stable version of these >> documentations along with current stable releases? >> I'm tired to deliver draft versions with Python(x,y), it simply seems >> unprofessional... >> >> In other words, I would really appreciate .chm docs to be updated to >> the latest stable releases of NumPy and SciPy. I'm sure that all >> Windows scientific Python users will appreciate as well! >> >> Cheers, >> Pierre >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Mathematician: noun, someone who disavows certainty when their uncertainty > set is non-empty, even if that set has measure zero. > > Hope: noun, that delusive spirit which escaped Pandora's jar and, with her > lies, prevents mankind from committing a general suicide. ?(As interpreted > by Robert Graves) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From pav at iki.fi Thu Jun 24 03:31:56 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Jun 2010 07:31:56 +0000 (UTC) Subject: [SciPy-User] Up-to-date SciPy/NumPy docs References: Message-ID: Wed, 23 Jun 2010 20:25:13 -0400, josef.pktd wrote: [clip] > Even if it is permanent work in progress, the current scipy chm is dated > 3/8/2009 ! > > This might correspond to the code of scipy 0.7.x but misses all doc > improvements. > I downloaded this chm file already several times hoping for an updated > version, only to see it's still the same. It's not updated automatically, since on an Unix machine you need at least Wine to make it work, and setting this up proved to be a bit painful. > Last time I tried, it was relatively easy to build with a .bat file on > Windows. What's the workflow to keep the chm updated? Build it, and send it to me (or someone else) with upload access to the server. > For multiversion docs, we might still have incomplete number of > "changed" and "added" notes to the docstrings. That's pretty likely. -- Pauli Virtanen From pav at iki.fi Thu Jun 24 03:40:23 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 24 Jun 2010 07:40:23 +0000 (UTC) Subject: [SciPy-User] Up-to-date SciPy/NumPy docs References: Message-ID: Hi, Wed, 23 Jun 2010 22:42:46 +0200, Pierre Raybaut wrote: [clip] > Wouldn't be more logical to propose a stable version of these > documentations along with current stable releases? I'm tired to deliver > draft versions with Python(x,y), it simply seems unprofessional... The "stable" versions there are essentially snapshots of the "draft" versions at the time of the release, with just the version numbers changed. 
Yes, I agree it'd be nice to time these to appear together with the releases. Perhaps we could adjust the release scripts to produce also documentation packages, as well as binary packages. > In other words, I would really appreciate .chm docs to be updated to the > latest stable releases of NumPy and SciPy. I'm sure that all Windows > scientific Python users will appreciate as well! Note that if you wish to ship up-to-date CHM files, you should be able to build them on Windows, in case we're too slow to respond. -- Pauli Virtanen From seb.haase at gmail.com Thu Jun 24 04:14:46 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 24 Jun 2010 10:14:46 +0200 Subject: [SciPy-User] OpenOpt and Scipy.optimize In-Reply-To: References: Message-ID: Hi all, hi Marcus, trying to evaluate OpenOpt vs. NLopt I found this posting (below). Just wanted send a ping and add myself as being interested in the answer ... Also, the NLopt library ( http://ab-initio.mit.edu/nlopt ) seems like another option ... Thanks, Sebastian Haase On Mon, May 31, 2010 at 7:03 AM, bowie_22 wrote: > Hello together, > > during my evaluation of scipy as subsitute for Matlab I started to look at the > optimization features of sciypy by looking at the optimze module. > > I posted a question and one answer contained a hint to OpenOpt. > > Now I am a little bit unsure how to proceed. Does it make more sense to look at > OpenOpt rather then evaluating scipy.optimize? > > Regrads > > Marcus > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pierre.raybaut at gmail.com Thu Jun 24 04:24:38 2010 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Thu, 24 Jun 2010 10:24:38 +0200 Subject: [SciPy-User] Up-to-date SciPy/NumPy docs Message-ID: Thanks for clarifying things regarding SciPy/NumPy docs stability. >From what I understand, I think that the best for Python(x,y) is to keep distributing the outdated versions of the docs. After all, essential features are still the same and were already well documented. Thanks again for your answer. Long live to SciPy! Cheers, Pierre 2010/6/24 : > Date: Wed, 23 Jun 2010 15:02:18 -0700 > From: David Goldsmith > Subject: Re: [SciPy-User] Up-to-date SciPy/NumPy docs > To: SciPy Users List > Message-ID: > ? ? ? ? > Content-Type: text/plain; charset="iso-8859-1" > > The problem is, the doc is *not* stable: the NumPy doc is quasi-stable > (there remain 59 "documents" - docstrings, user guide pages, reference > pages, and tutorial pages - in "Being written" status, and 81 in "Needs > editing" status, 19 of which are new since the end of April, i.e., NumPy > continues to be moving target), and the SciPy doc is in a highly > "non-uniform" state, both with respect to quantity and quality, which is > putting it euphemistically IMHO: frankly, if you distribute the SciPy doc > now as it is, you're doing us a better service by telling people that it > *isn't* stable, because saying it *is* in its present state, well, that's > what would make us look unprofessional (again, IMO). ?More to the point, the > SciPy doc is presently the focus of an ostensibly community-wide effort to > improve it, so, again ostensibly, presently it is anything but "stable." 
> > There's a lot of work to be done, but, under the philosophy that some doc is > better than no doc at all, the present, dynamic state hasn't stopped us from > releasing it "as is" in the past; indeed, in the past, I believe this has > been the main source of doc improvement: a user posts a doc deficiency to > the list, and that's when it's been taken care of - the "itch-scratching" > approach. ?I *think* we all wish that the doc could be completed yesterday, > but the fact of the matter is, until the community steps up and decides that > writing good, clear and complete doc for the stuff that's already there is > *at least* as important as increasing the code base, the state of the doc > will forever be "unstable." > > DG > > On Wed, Jun 23, 2010 at 1:42 PM, Pierre Raybaut wrote: > >> Hi all, >> >> I was planning to update Python(x,y) NumPy and SciPy plugins up to >> (resp.) v1.4.1 and v0.7.2 for a long time now but I was waiting for >> the documentation to be updated as well. But when I'm going to SciPy >> documentation website, there are either still outdated versions of >> NumPy and Scipy documentation available for download. >> >> More precisely, I have the choice to download either outdated .chm >> (the most interesting format for Windows) or .pdf documentations, or >> "too recent" html versions (drafts). >> >> Wouldn't be more logical to propose a stable version of these >> documentations along with current stable releases? >> I'm tired to deliver draft versions with Python(x,y), it simply seems >> unprofessional... >> >> In other words, I would really appreciate .chm docs to be updated to >> the latest stable releases of NumPy and SciPy. I'm sure that all >> Windows scientific Python users will appreciate as well! >> >> Cheers, >> Pierre >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Mathematician: noun, someone who disavows certainty when their uncertainty > set is non-empty, even if that set has measure zero. > > Hope: noun, that delusive spirit which escaped Pandora's jar and, with her > lies, prevents mankind from committing a general suicide. ?(As interpreted > by Robert Graves) > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20100623/0e5077f6/attachment.html > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 82, Issue 65 > ****************************************** > From pierre.raybaut at gmail.com Thu Jun 24 04:43:25 2010 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Thu, 24 Jun 2010 10:43:25 +0200 Subject: [SciPy-User] Up-to-date SciPy/NumPy docs Message-ID: Hi, > Hi, > > Wed, 23 Jun 2010 22:42:46 +0200, Pierre Raybaut wrote: > > In other words, I would really appreciate .chm docs to be updated to the > > latest stable releases of NumPy and SciPy. I'm sure that all Windows > > scientific Python users will appreciate as well! > > Note that if you wish to ship up-to-date CHM files, you should be able to > build them on Windows, in case we're too slow to respond. Of course I can. But if I had to build every single documentation for every single package included in Python(x,y)... well there won't be any Python(x,y)! I can't afford to spend so much time on each library. 
At some point, every library developer has to do his job entirely:
writing source code, building binaries and distributing them, *and*
doing the same for documentation. When building a module distribution
like Python(x,y), my job should be to redistribute and to package
things, not to build them from scratch. (Of course there are exceptions
like VTK and ITK, which I build every time from source.)

This being said, I'm conscious that documentation may seem like a waste
of time for developers, but from the user's point of view documentation
is almost as important as the code itself.

Cheers,
Pierre

> --
> Pauli Virtanen

From cr.anil at gmail.com Thu Jun 24 07:15:58 2010
From: cr.anil at gmail.com (Anil C R)
Date: Thu, 24 Jun 2010 16:45:58 +0530
Subject: [SciPy-User] SciPy crashes on loading scipy.stats
Message-ID:

SciPy crashes on loading the scipy.stats module on Win XP and Scipy
version 0.7.1

Anil

From ralf.gommers at googlemail.com Thu Jun 24 09:13:48 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Thu, 24 Jun 2010 21:13:48 +0800
Subject: [SciPy-User] Up-to-date SciPy/NumPy docs
In-Reply-To: References: Message-ID:

On Thu, Jun 24, 2010 at 3:40 PM, Pauli Virtanen wrote:

> Hi,
>
> Wed, 23 Jun 2010 22:42:46 +0200, Pierre Raybaut wrote:
> [clip]
> > Wouldn't it be more logical to propose a stable version of these
> > documentations along with the current stable releases? I'm tired of
> > delivering draft versions with Python(x,y), it simply seems
> > unprofessional...
>
> The "stable" versions there are essentially snapshots of the "draft"
> versions at the time of the release, with just the version numbers
> changed.
>
> Yes, I agree it'd be nice to time these to appear together with the
> releases. Perhaps we could adjust the release scripts to also produce
> documentation packages, as well as binary packages.
>
We already build the pdf docs for a release; I think it makes sense to
put them on the sourceforge download page as a separate download.
Adding html docs is also straightforward. I've never tried to build
the docs in .chm format; I'll see if it works under wine.
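For anyone who wants to try the HTML Help route themselves, it is
roughly a two-step affair. A minimal sketch (paths and the .hhp project
name are illustrative; it assumes Sphinx is installed and, for the
compile step, HTML Help Workshop's hhc.exe on Windows):

import subprocess

# 1) Render the Sphinx sources with the htmlhelp builder; this writes
#    the HTML pages plus a .hhp project file.
subprocess.check_call(['sphinx-build', '-b', 'htmlhelp',
                       'doc/source', 'build/htmlhelp'])

# 2) Compile the project into a .chm. hhc is known for unconventional
#    exit codes, so inspect its output rather than the return value.
subprocess.call(['hhc', 'build/htmlhelp/numpy.hhp'])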
I can also try to build the chm files on Windows in a few days. Josef > > Cheers, > Ralf > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ralf.gommers at googlemail.com Thu Jun 24 09:24:03 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Jun 2010 21:24:03 +0800 Subject: [SciPy-User] Up-to-date SciPy/NumPy docs In-Reply-To: References: Message-ID: On Thu, Jun 24, 2010 at 4:24 PM, Pierre Raybaut wrote: > Thanks for clarifying things regarding SciPy/NumPy docs stability. > > From what I understand, I think that the best for Python(x,y) is to > keep distributing the outdated versions of the docs. After all, > essential features are still the same and were already well > documented. > There are quite a few improvements especially in the numpy docs. Are you only interested in .chm, or is pdf/html fine. I've got pdf's for 1.4.1 and 0.7.2 on my computer somewhere for sure, so I could send them to you by tomorrow. > > > > On Wed, Jun 23, 2010 at 1:42 PM, Pierre Raybaut < > pierre.raybaut at gmail.com>wrote: > > > >> Hi all, > >> > >> I was planning to update Python(x,y) NumPy and SciPy plugins up to > >> (resp.) v1.4.1 and v0.7.2 for a long time now but I was waiting for > >> the documentation to be updated as well. But when I'm going to SciPy > >> documentation website, there are either still outdated versions of > >> NumPy and Scipy documentation available for download. > >> > If you haven't started yet, I would strongly suggest to use scipy 0.8.0b1. Or if you really want a final release, wait a week (or two at most). Despite the recent date of release of 0.7.2 the code it's based on is from Jan 2009, 0.8.0 is far better even in beta form. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Jun 24 09:33:16 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 24 Jun 2010 07:33:16 -0600 Subject: [SciPy-User] SciPy crashes on loading scipy.stats In-Reply-To: References: Message-ID: On Thu, Jun 24, 2010 at 5:15 AM, Anil C R wrote: > SciPy crashes on loading the scipy.stats module on Win XP and Scipy version > 0.7.1 > > That might be the numpy version. What numpy version are you running? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From djpine at gmail.com Thu Jun 24 12:37:07 2010 From: djpine at gmail.com (David Pine) Date: Thu, 24 Jun 2010 18:37:07 +0200 Subject: [SciPy-User] info scipy.ndimage.filters.maximum_filter In-Reply-To: <20100623091741.GA12170@phare.normalesup.org> References: <20100623091741.GA12170@phare.normalesup.org> Message-ID: <5E29A463-B16B-48EE-BF64-91C3E6FF1240@gmail.com> Emmanuelle, Nifty trick with IPython to see the wrapper. Thanks for the tip. I found the C code on the web -- on the "SciPy Dev Wiki" -- but I was unable to locate it on my computer. I use the Enthought distribution for the Mac. Thanks again. Dave On Jun 23, 2010, at 11:17 AM, Emmanuelle Gouillart wrote: > Hello, > > most of scipy.ndimage routines are written in C and then wrapped in > Python. This makes code introspection a bit more difficult. For > scipy.ndimage.filters.maximum_filter, the core of the routine can be > found in the source file scipy/ndimage/src/ni_filters.c (the path of > scipy sources depends on your installation), inside the > NI_MinOrMaxFilter1D function (I don't know if you're familiar with C). 
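As a minimal sketch of chasing both halves down from a Python prompt
(an illustration assuming a standard scipy install; the compiled half
lives in the _nd_image extension, which ships only as a binary in
installed copies):

import inspect
from scipy import ndimage
from scipy.ndimage import _nd_image

# The pure-Python wrapper that defines maximum_filter:
print inspect.getsourcefile(ndimage.filters)

# The compiled extension holding the C implementation (e.g. the code
# from ni_filters.c); inspect cannot show source for this one:
print _nd_image.__file__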
> > If you're using the Ipython shell, a useful trick is to type '%edit > function_name" to open the source code in a text editor, for example >>>> from scipy import ndimage >>>> %edit ndimage.maximum_filter > Editing... done. Executing edited code... > > will open the the Python file with the wrapper. Of course, this feature > is more useful when functions are written only with Python. > > I don't know if this partly answers your question or not... > > Cheers, > > Emmanuelle > > On Wed, Jun 23, 2010 at 10:54:05AM +0200, David Pine wrote: >> I did but I tried again and found something called "SciPy Dev Wiki" (http://projects.scipy.org/scipy/browser/trunk/scipy/ndimage/filters.py?rev=6405) which contains the original Python code. I can figure things out from that. It seems odd to me that the there seems to be no link to the source code for these routines on the SciPy documentation site. > >> -David Pine > >> On Jun 23, 2010, at 10:22 AM, Sebastian Haase wrote: > >>> did you try to google for it ? >>> - Sebastian Haase > > >>> On Fri, Jun 18, 2010 at 5:08 PM, David Pine wrote: >>>> How do I get more detailed information about scipy.ndimage.filters.maximum_filter than is available at the Numpy and Scipy Documentation Reference Guide? The guide tells you what the routine does but not how it does it. >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From R.Springuel at umit.maine.edu Thu Jun 24 13:27:05 2010 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Thu, 24 Jun 2010 13:27:05 -0400 Subject: [SciPy-User] corrcoef and dump Message-ID: <4C239569.70709@umit.maine.edu> Is there a version of corrcoef in numpy that doesn't return a matrix output? I'm trying to calculate a 15x15 matrix of correlation coefficients, but my program is crashing with a MemoryError during the calculations. As a result, I'd like to break up the calculations to ease the memory load. If I feed corrcoef the lists in a pairwise fashion, then I don't need the matrix format of the output, and indeed would find it slightly easier if it wasn't matrix formated (though ultimately I can deal with the matrix format if needed). Also, if I dump an ndarray to file and that array later changes in a program, will the file be updated to reflect the changes? I didn't think so originally, but I have a program that dumps an array to file which is working in a DropBox folder and got a notice that the file was written, and then a couple of minutes later that it was updated. Since there is only one dump command in the program, this makes me suspect that the mutable nature of the array is being played with by the program (accidentally, but that's a separate issue) and then redumping the array. -- R. 
Padraic Springuel Research Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By Appointment Only From charlesr.harris at gmail.com Thu Jun 24 13:44:17 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 24 Jun 2010 11:44:17 -0600 Subject: [SciPy-User] corrcoef and dump In-Reply-To: <4C239569.70709@umit.maine.edu> References: <4C239569.70709@umit.maine.edu> Message-ID: On Thu, Jun 24, 2010 at 11:27 AM, R. Padraic Springuel < R.Springuel at umit.maine.edu> wrote: > Is there a version of corrcoef in numpy that doesn't return a matrix > output? I'm trying to calculate a 15x15 matrix of correlation > coefficients, but my program is crashing with a MemoryError during the > calculations. As a result, I'd like to break up the calculations to > ease the memory load. If I feed corrcoef the lists in a pairwise > fashion, then I don't need the matrix format of the output, and indeed > would find it slightly easier if it wasn't matrix formated (though > ultimately I can deal with the matrix format if needed). > > Out of curiosity, what is the size of the inputs to corrcoef? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.raybaut at gmail.com Thu Jun 24 17:16:55 2010 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Thu, 24 Jun 2010 23:16:55 +0200 Subject: [SciPy-User] [ANN] Spyder v1.1.0 released Message-ID: Hi all, I'm pleased to announce here that Spyder version 1.1.0 has been released: http://packages.python.org/spyder Spyder (the Scientific PYthon Development EnviRonment) is a free open-source Python development environment providing MATLAB-like features in a simple and light-weighted software, available for Windows XP/Vista/7, GNU/Linux and MacOS X: * advanced code editing features (code analysis, ...) * interactive console with MATLAB-like workspace (with GUI-based list, dictionary, tuple, text and array editors -- screenshots: http://packages.python.org/spyder/console.html#the-workspace) and integrated matplotlib figures * external console to open an interpreter or run a script in a separate process (with a global variable explorer providing the same features as the interactive console's workspace) * code analysis with pyflakes and pylint * search in files features * object inspector: automatically retrieves docstrings or source code of the function/class called in the interactive/external console * online documentation viewer (pydoc) * integrated file/directories explorer * MATLAB-like path management * project management ...and more! Spyder is part of spyderlib, a Python module based on PyQt4 and QScintilla2 which provides powerful console-related PyQt4 widgets. Some of the major changes since v1.0.0 (433 commits!): * A lot of bugfixes! 
* IPython integration within the external console (still experimental) * QScintilla2 is now optional (a whole pure PyQt4 code editor -faster than its QScintilla's counterpart- has been implemented): brings code folding and code completion * Improved Matplotlib's figure options feature (added support for image parameters, added an "Apply" button) * Added: Project Explorer plugin (Pydev projects may be imported) * Added: Online help browser plugin (based on pydoc) * Editor new features: * Unlimited horizontal/vertical splitting: each new editor panel is a clone of the first panel, allowing comparing two parts of the same file * Unlimited independent editor windows creation * Flag vertical scrollbar area: shows warnings, TODOs, FIXMEs and occurrence highlighting of the whole file * External console: added import/export features to the variable explorer Cheers, Pierre From cr.anil at gmail.com Fri Jun 25 00:51:11 2010 From: cr.anil at gmail.com (Anil C R) Date: Fri, 25 Jun 2010 10:21:11 +0530 Subject: [SciPy-User] SciPy crashes on loading scipy.stats In-Reply-To: References: Message-ID: I'm running numpy 1.3.0, actually I'm using Python(x,y) could that be a problem? Anil On Thu, Jun 24, 2010 at 7:03 PM, Charles R Harris wrote: > > > On Thu, Jun 24, 2010 at 5:15 AM, Anil C R wrote: > >> SciPy crashes on loading the scipy.stats module on Win XP and Scipy >> version 0.7.1 >> >> > That might be the numpy version. What numpy version are you running? > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Jun 25 09:27:44 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 25 Jun 2010 07:27:44 -0600 Subject: [SciPy-User] SciPy crashes on loading scipy.stats In-Reply-To: References: Message-ID: On Thu, Jun 24, 2010 at 10:51 PM, Anil C R wrote: > I'm running numpy 1.3.0, actually I'm using Python(x,y) could that be a > problem? > Anil > ' > Don't know, but IIRC, there was a problem with extensions compiled against 1.3 crashing when run on earlier versions of numpy (they should raise an error). Do any other imports crash? Do you have two versions of numpy installed by any chance? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cr.anil at gmail.com Fri Jun 25 12:17:38 2010 From: cr.anil at gmail.com (Anil C R) Date: Fri, 25 Jun 2010 21:47:38 +0530 Subject: [SciPy-User] SciPy crashes on loading scipy.stats In-Reply-To: References: Message-ID: On Fri, Jun 25, 2010 at 6:57 PM, Charles R Harris wrote: > > Don't know, but IIRC, there was a problem with extensions compiled against > 1.3 crashing when run on earlier versions of numpy (they should raise an > error). Do any other imports crash? Do you have two versions of numpy > installed by any chance? > > Chuck > > not the ones I've used... scipy.ndimage, scipy.special and scipy.misc don't seem to... and these are the only one's I've used on Windows... Anil -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From jsseabold at gmail.com Fri Jun 25 12:28:03 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 25 Jun 2010 12:28:03 -0400
Subject: [SciPy-User] Autocorrelation function: Convolution vs FFT
In-Reply-To: <1277278637.14207.4.camel@antares>
References: <1277278637.14207.4.camel@antares> Message-ID:

On Wed, Jun 23, 2010 at 3:37 AM, davide wrote:
> Have a look at "Random Data Analysis" by Piersol and Bendat.

Thanks for the reference.

> Here is some code of mine. Still needs some tweaks.

Thanks. This is similar to what I came up with, though maybe David's
from talkbox is more general.

> To test it, try to autocorrelate a long time history of a pure sine wave.
> Then compare the result with a cosine of the same frequency.
>
> import numpy as np
>
> def acorr( y, fs=1, maxlags=None, normed=True, full=False ):
>     """
>     Get the auto-correlation function of a signal.
>
>     Parameters
>     ----------
>     y      : a one dimensional array
>     maxlags: the maximum number of time delays for which
>              to compute the auto-correlation.
>     normed : a boolean option. If true the normalized
>              auto-correlation function is returned.
>     fs     : the sampling frequency of the data
>     full   : if True also a time array is returned for
>              plotting purposes
>
>     Returns
>     -------
>     rho    : the auto-correlation function
>     t      : a time array. Only if full==True
>
>     Example
>     -------
>     t = np.arange(2**20) / 1000.0
>     y = np.sin(2*np.pi*100*t)
>     rho = acorr( y, maxlags=1000 )
>     """
>
>     if not maxlags:
>         maxlags = len(y)/2
>
>     if maxlags > len(y)/2:
>         maxlags = len(y)/2
>
>     fs = float(fs)
>
>     # pad with zeros
>     x = np.hstack( (y, np.zeros(len(y))) )
>
>     # compute FFT transform of signal
>     sp = np.fft.rfft( x )
>     tmp = np.empty_like(sp)
>     tmp = np.conj(sp, tmp)
>     tmp = np.multiply(tmp, sp, tmp)
>     rho = np.fft.irfft( tmp )
>
>     # divide by array length
>     rho = np.divide(rho, len(y), rho)[:maxlags]
>
>     # obtain the unbiased estimate
>     tmp = len(y) / ( len(y) - np.arange(maxlags, dtype=np.float64) )
>     rho = np.multiply(rho, tmp, rho)
>
>     if normed:
>         rho = rho / rho[0]
>
>     if full:
>         t = np.arange(maxlags, dtype=np.float32) / fs
>         return t, rho
>     else:
>         return rho
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From chris.d.burns at gmail.com Fri Jun 25 12:43:51 2010
From: chris.d.burns at gmail.com (Christopher Burns)
Date: Fri, 25 Jun 2010 09:43:51 -0700
Subject: [SciPy-User] checklist script error
In-Reply-To: <20100625063458.ABO14926@comet.stsci.edu>
References: <20100625063458.ABO14926@comet.stsci.edu> Message-ID:

Hey Kevin,

Setuptools can be installed from pypi:
http://pypi.python.org/pypi/setuptools/0.6c8

Mayavi is more of a challenge due to its dependencies. You have two
choices here:

1) The Easy Way: Installing one of the all-in-one distributions like
EPD or pythonxy. These will install all the dependencies (and then
some) in a new python installation.
http://www.enthought.com/products/epd.php
http://www.pythonxy.com/

2) The Hard Way: Installing mayavi and its dependencies yourself.
Download mayavi from pypi:
http://pypi.python.org/pypi/Mayavi/3.3.2

Mayavi depends on VTK and wxPython; there are links for these under the
Prerequisites section on the pypi page. For wxPython, there is a dmg
installer on their downloads page.
It's worth a simple check to see if you have it installed already. If
you do, this command will print out a version number:

python -c "import wx; print wx.__version__"

There is no binary for VTK, so you need to download the source (tarball
or zip) and build it using CMake. This can go easy or hard depending on
your comfort level with building C code. There are compilation
instructions in the VTK Readme.html.

Chris

On Fri, Jun 25, 2010 at 3:34 AM, Kevin wrote:
> The entire output of that script upon attempting to run it is the following:
>
> Running tests:
> __main__.test_imports('setuptools', None) ... ERROR
> __main__.test_imports('IPython', None) ... MOD: IPython, version: 0.9.1
> ok
> __main__.test_imports('numpy', None) ... MOD: numpy, version: 1.3.0
> ok
> __main__.test_imports('scipy', None) ... MOD: scipy, version: 0.7.1
> ok
> __main__.test_imports('scipy.io', None) ... MOD: scipy.io, version: *no info*
> ok
> __main__.test_imports('matplotlib', ) ... MOD: matplotlib, version: 0.99.0
> ok
> __main__.test_imports('pylab', None) ... MOD: pylab, version: *no info*
> ok
> __main__.test_imports('enthought.mayavi.api', None) ... ERROR
> __main__.test_loadtxt(array([[ 0.,  1.], ... ok
> __main__.test_loadtxt(array([('M', 21, 72.0), ('F', 35, 58.0)], ... ok
> __main__.test_loadtxt(array([ 1.,  3.]), array([ 1.,  3.])) ... ok
> __main__.test_loadtxt(array([ 2.,  4.]), array([ 2.,  4.])) ... ok
> Simple plot generation. ... ok
> Plots with math ... ok
>
> ======================================================================
> ERROR: __main__.test_imports('setuptools', None)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/stsci/pyssg/2.5.4/nose/case.py", line 183, in runTest
>     self.test(*self.arg)
>   File "intro_tut_checklist.py", line 95, in check_import
>     exec "import %s as m" % mnames
>   File "", line 1, in
> ImportError: No module named setuptools
>
> ======================================================================
> ERROR: __main__.test_imports('enthought.mayavi.api', None)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/stsci/pyssg/2.5.4/nose/case.py", line 183, in runTest
>     self.test(*self.arg)
>   File "intro_tut_checklist.py", line 95, in check_import
>     exec "import %s as m" % mnames
>   File "", line 1, in
> ImportError: No module named enthought.mayavi.api
>
> ----------------------------------------------------------------------
> Ran 14 tests in 10.766s
>
> FAILED (errors=2)
> Cleanup - removing temp directory: /Users/lindsay/tmp-testdata-etwtf9
>
> ***************************************************************************
>                            TESTS FINISHED
> ***************************************************************************
>
> If the printout above did not finish in 'OK' but instead says 'FAILED', copy
> and send the *entire* output, including the system information below, for help.
> We'll do our best to assist you.  You can send your message to the Scipy user
> mailing list:
>
>     http://mail.scipy.org/mailman/listinfo/scipy-user
>
> but feel free to also CC directly:  cburns at berkeley dot edu
>
>
> ==================
> System information
> ==================
> os.name      : posix
> os.uname     : ('Darwin', 'mooseman.home', '9.8.0', 'Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386', 'i386')
> platform     : darwin
> platform+    : Darwin-9.8.0-i386-32bit
> prefix       : /usr/stsci/pyssg/Python-2.5.4
> exec_prefix  : /usr/stsci/pyssg/Python-2.5.4
> executable   : /usr/stsci/pyssg/Python-2.5.4//bin/python
> version_info : (2, 5, 4, 'final', 0)
> version      : 2.5.4 (r254:67916, Nov  6 2009, 11:35:14)
> [GCC 4.0.1 (Apple Inc. build 5465)]
> ==================
>

From R.Springuel at umit.maine.edu Fri Jun 25 13:04:09 2010
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Fri, 25 Jun 2010 13:04:09 -0400
Subject: [SciPy-User] corrcoef and dump
In-Reply-To: References: Message-ID: <4C24E189.3000906@umit.maine.edu>

Each of the 15 arrays between which I want the calculations has
6,695,970 entries in it. I tried feeding corrcoef just two of the lists
and while I don't get a MemoryError (the python exception), I do get a
couple of these errors:

Python(337) malloc: *** mmap(size=2678390784) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug

They don't seem to make the program fail, however, as I still get a
result from corrcoef.

I found a function I'd written some time before to calculate the
correlation coefficient that doesn't raise that error and its results
agree with those from corrcoef, so there has to be an implementation
thing going on here with memory usage.
--

R. Padraic Springuel
Research Assistant
Department of Physics and Astronomy
University of Maine
Bennett 309
Office Hours: By Appointment Only

From andrew at andrewschein.com Fri Jun 25 13:09:16 2010
From: andrew at andrewschein.com (Andrew Schein)
Date: Fri, 25 Jun 2010 10:09:16 -0700
Subject: [SciPy-User] scipy.sparse.csr_matrix.matmat deprecation question
Message-ID:

I would like to perform a matrix multiplication of the form

A * B

where A is dense and B is sparse CSR or COO. Does scipy.sparse have this
capability and will it in the future? How fast is the scipy implementation
in comparison to INTEL MKL?

It appears that there is a .matmat function that has been deprecated. Does
this reflect a retreat, or is the functionality found in some other place?

Thanks,
Andrew

--
Andrew I. Schein
www.andrewschein.com

From matthew.brett at gmail.com Fri Jun 25 15:04:25 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 25 Jun 2010 15:04:25 -0400
Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49
In-Reply-To: References: Message-ID:

Hi,

> It also requires that we restrict ourselves to descriptive use of the
> trademarked term. Calling a module "matlab", even as part of
> "scipy.io.matlab", might not be in line with the latter requirement,

I'm going to suggest 'scipy.io.matfiles' again. Then there's no
trademark in the name, as far as I am aware.

> I tried hard to get her to agree that we did not need to use the
> R-in-a-circle symbol at all. However, her second email below notes a
> case in which someone was forced by a US court to include the
> R-in-a-circle symbol in a fair use of someone else's registered
> trademark:
>
> G.D. Searle & Co. v. Hudson Pharm Corp 715 F.2d 837, 839 (3rd
> Cir. 1983)

http://openjurist.org/715/f2d/837/gd-searle-co-v-hudson-pharmaceutical-corporation-gd-and-82-5600-82-5621

In that case, the judge concluded that one pharmaceutical company had
deliberately changed its packaging to make their product look more like
that of a competitor, and, as part of that packaging, had not made
clear that the competitor's trademark was a trademark.
The initial restraining order did insist on (TM) and more information
next to the trademark, but the subsequent decision was only to agree
that there had been an attempt to confuse as to trademark ownership,
and that the effects of the restraining order had adequately rectified
that. The decision does not set it as a point of principle that (R) or
(TM) should be next to a trademark.

Our job (legally and in order to make the documentation clear) is to
make sure that when we say MATLAB, it's absolutely clear that MATLAB
means the MathWorks' software. I continue to think the MATLAB [1] (etc)
approach is less embarrassing. I think it covers the grounds of the
complaint in the cited case. It also matches the reasonable-sounding
advice we got earlier from Jonathan Guyer about the approach taken at
NIST.

Joe - maybe you could ask whether your advisors agree?

Best,
Matthew

From ben.root at ou.edu Fri Jun 25 15:14:44 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Fri, 25 Jun 2010 14:14:44 -0500
Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49
In-Reply-To: References: Message-ID:

On Fri, Jun 25, 2010 at 2:04 PM, Matthew Brett wrote:

> Hi,
>
> > It also requires that we restrict ourselves to descriptive use of the
> > trademarked term. Calling a module "matlab", even as part of
> > "scipy.io.matlab", might not be in line with the latter requirement,
>
> I'm going to suggest 'scipy.io.matfiles' again. Then there's no
> trademark in the name, as far as I am aware.
>

+1

I am working on the documentation and I was just about to suggest
something like "scipy.io.matfile" because that is what the file type is
called. This would be consistent with the other modules in scipy.io
such as netcdf and arff. This would also make it very self-documenting.
Users will not be misled into expecting anything more related to matlab
than what a scipy.io.matlab section actually contains.

I propose moving the scipy/io/matlab directory to scipy/io/matfile and
having the __init__.py file for scipy.io import scipy.io.matfile as
matlab. I don't know if that works for all of the ways one would call
that module. Also, is there any sort of way to make a deprecation
warning fire for importing scipy.io.matlab, but not for
scipy.io.matfile? I have never had to do any sort of fancy module
setup, so I am not sure what is best.

Ben Root

From g.statkute at gmail.com Fri Jun 25 16:01:56 2010
From: g.statkute at gmail.com (gintare statkute)
Date: Fri, 25 Jun 2010 23:01:56 +0300
Subject: [SciPy-User] installation missing files, ICC compiler
Message-ID:

Hello,

I am not able to install ATLAS.

1) My computer has no ICC compiler, and the ICC compiler from Intel
seems too modern for my processor according to its description.
I try to use gcc-4.2 instead of ICC * working:/usr/ATLAS# /opt/pages/ATLAS/configure -b 64 --prefix=/opt/pages/ATLAS -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC --with-netlib-lapack=$/opt/lapack-3.1.1/lapack_LINUX.a * I am getting error: *working:/usr/ATLAS# /opt/pages/ATLAS/configure -b 64 --prefix=/opt/pages/ATLAS -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC --with-netlib-lapack=$/opt/lapack-3.1.1/lapack_LINUX.a * 2) Another error is about missing files: */usr/include/gnu/stubs.h:9:27: error: gnu/stubs-64.h: No such file or directory * Full code: *working:/usr/ATLAS# /opt/pages/ATLAS/configure -b 64 --prefix=/opt/pages/ATLAS -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC --with-netlib-lapack=$/opt/lapack-3.1.1/lapack_LINUX.a make: `xconfig' is up to date. ./xconfig -d s /opt/pages/ATLAS/ -d b /usr/ATLAS -b 64 -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC -Si lapackref 1 OS configured as Linux (1) Assembly configured as GAS_x8632 (1) Vector ISA Extension configured as SSE3 (2,28) Architecture configured as Core2 (15) Clock rate configured as 1667Mhz Maximum number of threads configured as 2 Parallel make command configured as '$(MAKE) -j 2' Cannot detect CPU throttling. rm -f config1.out make atlas_run atldir=/usr/ATLAS exe=xprobe_comp args="-v 0 -o atlconf.txt -O 1 -A 15 -Si nof77 0 -C ic 'gcc-4.2' -Fa ic '-fPIC' -C sm '/usr/bin/gcc-4.2' -Fa sm '-fPIC' -C dm '/usr/bin/gcc-4.2' -Fa dm '-fPIC' -C sk '/usr/bin/gcc-4.2' -Fa sk '-fPIC' -C dk '/usr/bin/gcc-4.2' -Fa dk '-fPIC' -Fa xc '-fPIC' -Fa if '-fPIC' -b 64" \ redir=config1.out make[1]: Entering directory `/usr/ATLAS' cd /usr/ATLAS ; ./xprobe_comp -v 0 -o atlconf.txt -O 1 -A 15 -Si nof77 0 -C ic 'gcc-4.2' -Fa ic '-fPIC' -C sm '/usr/bin/gcc-4.2' -Fa sm '-fPIC' -C dm '/usr/bin/gcc-4.2' -Fa dm '-fPIC' -C sk '/usr/bin/gcc-4.2' -Fa sk '-fPIC' -C dk '/usr/bin/gcc-4.2' -Fa dk '-fPIC' -Fa xc '-fPIC' -Fa if '-fPIC' -b 64 > config1.out In file included from /usr/include/features.h:354, from /usr/include/stdio.h:28, from /opt/pages/ATLAS//CONFIG/src/backend/comptestC.c:1: /usr/include/gnu/stubs.h:9:27: error: gnu/stubs-64.h: No such file or directory make[2]: *** [IRunCComp] Error 1 Unable to find usable compiler for ICC; abortingMake sure compilers are in your path, and specify good compilers to configure (see INSTALL.txt or 'configure --help' for details)make[1]: *** [atlas_run] Error 1 make[1]: Leaving directory `/usr/ATLAS' make: *** [IRun_comp] Error 2 xconfig: /opt/pages/ATLAS//CONFIG/src/config.c:125: ProbeComp: Assertion `!system(ln)' failed. /bin/sh: line 1: 24985 Aborted ./xconfig -d s /opt/pages/ATLAS/ -d b /usr/ATLAS -b 64 -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC -Si lapackref 1 -D c -DATL_FULL_LAPACK xconfig exited with 134 working:/usr/ATLAS# /opt/pages/ATLAS/configure -b 64 --prefix=/opt/pages/ATLAS -Ss kern /usr/bin/gcc-4.2 -C ic gcc-4.2 -Fa alg -fPIC --with-netlib-lapack=$/opt/lapack-3.1.1/lapack_LINUX.a* regards, gintare statkute -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From eike.welk at gmx.net Fri Jun 25 19:41:38 2010
From: eike.welk at gmx.net (Eike Welk)
Date: Sat, 26 Jun 2010 01:41:38 +0200
Subject: [SciPy-User] installation missing files, ICC compiler
In-Reply-To: References: Message-ID: <201006260141.38421.eike.welk@gmx.net>

On Friday June 25 2010 22:01:56 gintare statkute wrote:
> In file included from /usr/include/features.h:354,
> from /usr/include/stdio.h:28,
> from /opt/pages/ATLAS//CONFIG/src/backend/comptestC.c:1:
> /usr/include/gnu/stubs.h:9:27: error: gnu/stubs-64.h: No such file or
> directory

You probably have to install one more development package. Look at this
thread:
http://www.mail-archive.com/debian-glibc at lists.debian.org/msg37582.html

So if you are using Debian, try to install a package named:
libc6-dev-amd64

This is apparently done by typing (I'm using Suse):
sudo apt-get install libc6-dev-amd64

Eike.

From kwgoodman at gmail.com Fri Jun 25 20:30:04 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Fri, 25 Jun 2010 17:30:04 -0700
Subject: [SciPy-User] mary, a masked array
Message-ID:

An outer join of two data objects (labeled arrays, larrys, in my case)
can introduce missing values when one data object contains labels that
are not in the other data object. For float data I fill the missing
values with NaN. But I couldn't come up with a good fill value for int
or bool data. Converting int and bool to float is one way to go, but
not ideal. The obvious solution is to use np.ma to mask the missing
values. But my masking needs are modest so I coded up a quick proof of
concept for a stripped down masked array class that is tailored to my
needs. Here's what I came up with: http://github.com/kwgoodman/mary

Comments and suggestions are welcome. I'm not familiar with np.ma so
I imagine there are many issues I haven't thought through.

From josef.pktd at gmail.com Sat Jun 26 08:00:26 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 26 Jun 2010 08:00:26 -0400
Subject: [SciPy-User] Saving the world from economic collapse with Python?
Message-ID:

Just some pieces of information that numpy and scipy are popular

http://www.activestate.com/blog/2010/06/must-have-python-packages-finance
http://www.activestate.com/blog/2010/06/saving-world-economic-collapse-python
http://www.activestate.com/press-releases/activestate-adds-key-python-packages-financial-and-scientific-computing-markets

Josef

From pgmdevlist at gmail.com Sat Jun 26 12:42:32 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 26 Jun 2010 12:42:32 -0400
Subject: [SciPy-User] mary, a masked array
In-Reply-To: References: Message-ID: <75D1317E-ED32-4C68-A9CF-F2073CF73F4C@gmail.com>

On Jun 25, 2010, at 8:30 PM, Keith Goodman wrote:
> An outer join of two data objects (labeled arrays, larrys, in my case)
> can introduce missing values when one data object contains labels that
> are not in the other data object. For float data I fill the missing
> values with NaN. But I couldn't come up with a good fill value for int
> or bool data. Converting int and bool to float is one way to go, but
> not ideal. The obvious solution is to use np.ma to mask the missing
> values. But my masking needs are modest so I coded up a quick proof of
> concept for a stripped down masked array class that is tailored to my
> needs.
> Here's what I came up with: http://github.com/kwgoodman/mary

You're re-implementing the original version of MaskedArray :)
(in numpy <1.2, a masked array was the combination of a standard
ndarray (your data) and either a boolean ndarray or a boolean (your
mask)...). That's quite OK, as long as you're not bothered by the fact
that a larry/mary is not an array.

> Comments and suggestions are welcome. I'm not familiar with np.ma so
> I imagine there are many issues I haven't thought through.

What happens if you calculate sqrt(-1) with a mary?

From kwgoodman at gmail.com Sat Jun 26 13:10:27 2010
From: kwgoodman at gmail.com (Keith Goodman)
Date: Sat, 26 Jun 2010 10:10:27 -0700
Subject: [SciPy-User] mary, a masked array
In-Reply-To: <75D1317E-ED32-4C68-A9CF-F2073CF73F4C@gmail.com>
References: <75D1317E-ED32-4C68-A9CF-F2073CF73F4C@gmail.com> Message-ID:

On Sat, Jun 26, 2010 at 9:42 AM, Pierre GM wrote:
>
> On Jun 25, 2010, at 8:30 PM, Keith Goodman wrote:
>
>> An outer join of two data objects (labeled arrays, larrys, in my case)
>> can introduce missing values when one data object contains labels that
>> are not in the other data object. For float data I fill the missing
>> values with NaN. But I couldn't come up with a good fill value for int
>> or bool data. Converting int and bool to float is one way to go, but
>> not ideal. The obvious solution is to use np.ma to mask the missing
>> values. But my masking needs are modest so I coded up a quick proof of
>> concept for a stripped down masked array class that is tailored to my
>> needs. Here's what I came up with: http://github.com/kwgoodman/mary
>
> You're re-implementing the original version of MaskedArray :)
> (in numpy <1.2, a masked array was the combination of a standard ndarray (your data) and either a boolean ndarray or a boolean (your mask)...). That's quite OK, as long as you're not bothered by the fact that a larry/mary is not an array.

Ah, that's good to know. I'll take a look. Thank you.

>> Comments and suggestions are welcome. I'm not familiar with np.ma so
>> I imagine there are many issues I haven't thought through.
>
> What happens if you calculate sqrt(-1) with a mary?

Same as np.sqrt(-1), which gives NaN. But, as coded, the mask does not
get updated even if the marker is NaN. So far only assignment by
indexing updates the mask.

From ben.root at ou.edu Sat Jun 26 14:13:28 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Sat, 26 Jun 2010 13:13:28 -0500
Subject: [SciPy-User] Saving the world from economic collapse with Python?
In-Reply-To: References: Message-ID:

"Saving the world from economic collapse with Python"

No pressure!

From josef.pktd at gmail.com Sat Jun 26 14:39:45 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 26 Jun 2010 14:39:45 -0400
Subject: [SciPy-User] Saving the world from economic collapse with Python?
In-Reply-To: References: Message-ID:

On Sat, Jun 26, 2010 at 2:13 PM, Benjamin Root wrote:
> "Saving the world from economic collapse with Python"
>
> No pressure!

No need to worry, scipy doesn't have a Gaussian Copula

http://www.wired.com/techbiz/it/magazine/17-03/wp_quant
http://www.forbes.com/2009/05/07/gaussian-copula-david-x-li-opinions-columnists-risk-debt.html

and the test coverage is between zero and one-hundred percent.
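(As an aside, the Gaussian copula itself is only a few lines of numpy.
A rough sketch — not a scipy API; 'corr' is assumed to be a
positive-definite correlation matrix:)

import numpy as np
from scipy import stats

def gaussian_copula_rvs(corr, size):
    # draw correlated standard normals via a Cholesky factor of corr ...
    L = np.linalg.cholesky(np.asarray(corr))
    z = np.dot(L, np.random.standard_normal((L.shape[0], size)))
    # ... and push them through the normal CDF to get dependent uniforms
    return stats.norm.cdf(z).T

u = gaussian_copula_rvs([[1.0, 0.8], [0.8, 1.0]], size=1000)
print u.shape  # (1000, 2)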
Josef http://www.economist.com/blogs/freeexchange/2009/04/in_defense_of_copula > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From sebastian.walter at gmail.com Sat Jun 26 15:43:08 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Sat, 26 Jun 2010 21:43:08 +0200 Subject: [SciPy-User] solving linear algebra and substitute in diff equations reg. In-Reply-To: References: Message-ID: There are dedicated solvers for Differential Algebraic Equations (DAEs). However, no such solver is included in scipy, at least not in the version that I got. There is a sprint on ODEs in the upcoming scipy conference. http://conference.scipy.org/scipy2010/sprints.html. I don't know if they are going to discuss DAEs, but in the scikit http://scipy.org/scipy/scikits/browser/trunk/odes/scikits/odes/dae.py there seems to be some support for DAEs. So possibly, DAEs may be supported in scipy in the future. In the meantime, you could try http://pysundials.sourceforge.net/ which are bindings to SUNDIALS. I haven't used pysundails, but it's certainly worth a shot. Personally I'm using Python bindings to SolvIND (http://www.iwr.uni-heidelberg.de/~Jan.Albersmeyer/solvind/). This software is powerful and versatile. However, it is not open source. Sebastian On Mon, Jun 21, 2010 at 7:47 AM, morovia morovia wrote: > Hello, > > ??????? I have 2 homogeneous linear algebraic > equations and 4 differential equations.? Solving > the algebraic equations and substituting, I can > eliminate 2 variables out of 6, resulting in 4 > differential equations which can be written in > matrix form for further analysis. > > Presently I am using the individual elements of > the matrix to compute. > > I am wondering, whether this substitution > and solving can be carried out directly through > scipy.? Or can sympy be used for this purpose. > > Thanks in advance, > > Best regards > Morovia. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Sat Jun 26 17:59:09 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 26 Jun 2010 17:59:09 -0400 Subject: [SciPy-User] incomplete beta function B(x; k+1, 0) ? Message-ID: Is there a incomplete beta function with a zero argument B(x; k+1, 0) available in scipy? >>> special.betainc( np.arange(5)+1, 0, 0.5) array([ NaN, NaN, NaN, NaN, NaN]) http://en.wikipedia.org/wiki/Logarithmic_distribution has an expression for the cdf of logseries, which is not in scipy.stats.distributions, but I don't find the right incomplete Beta function values. Josef From pav at iki.fi Sat Jun 26 18:56:38 2010 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 26 Jun 2010 22:56:38 +0000 (UTC) Subject: [SciPy-User] incomplete beta function B(x; k+1, 0) ? References: Message-ID: Sat, 26 Jun 2010 17:59:09 -0400, josef.pktd wrote: > Is there a incomplete beta function with a zero argument B(x; k+1, 0) > available in scipy? > >>>> special.betainc( np.arange(5)+1, 0, 0.5) > array([ NaN, NaN, NaN, NaN, NaN]) >>> print special.betainc.__doc__ betainc(x1, x2, x3[, out]) y=betainc(a,b,x) returns the incomplete beta integral of the arguments, evaluated from zero to x: gamma(a+b) / (gamma(a)*gamma(b)) * integral(t**(a-1) (1-t)**(b-1), t=0..x). So the function in Scipy has an additional normalization prefactor. 
The prefactor, however, seems to be zero for b=0, so this function in
Scipy probably can't easily be used for finding the value of the
integral at b=0.

But if you just set b=1e-99 (or anything smaller than the machine
epsilon) and divide the prefactor away, you should end up with the
result you are looking for.

At b=0 the integral is logarithmically divergent towards x->1, and a
finite b > 0 cuts this off around x ~ 1 - exp(-1/b) --- so it shouldn't
matter. The rest of the integrand should also saturate at b ~ machine
epsilon.

But apparently, there's something fishy in the betainc algorithm for x
close to 1 and b close to 0, for example this discontinuity:

    >>> sc.betainc(1, 1.000002e-6, 1 - 1e-6)
    1.3815442755027441e-05
    >>> sc.betainc(1, 1.000001e-6, 1 - 1e-6)
    1.2397031262149499e-05

So while things in principle are OK, you probably shouldn't trust
things beyond x > 1 - 1e-3.

--
Pauli Virtanen

From vincent at vincentdavis.net Sat Jun 26 20:31:42 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Sat, 26 Jun 2010 18:31:42 -0600
Subject: [SciPy-User] corrcoef and dump
In-Reply-To: <4C24E189.3000906@umit.maine.edu>
References: <4C24E189.3000906@umit.maine.edu> Message-ID:

Here is a by-row corr function. You'll need to decide how you want the
results. I took this from a larger function, and although it is called
by_row_corr() I think it returns by column. I was using this on a
120,000 X 120,000 array. Slow, but no memory problem.

import numpy as np

def by_row_corr(anarray, test_array):
    stdarray = (anarray - anarray.mean(0)) / anarray.std(0)  # standardize
    stdtestarray = np.append(anarray, [test_array], axis=0)
    stdtestarray = (stdtestarray - stdtestarray.mean(0)) / stdtestarray.std(0)  # standardize
    nobs, nvars = stdarray.shape  # for the test array nobs will increase by 1
    sumcorrdiff = np.empty(nvars)
    # calculate correlation coefficient for each variable with all others
    for col in xrange(nvars):
        corr = np.dot(stdarray[:, col], stdarray) / nobs
        #print 'corr', corr
        corrt = np.dot(stdtestarray[:, col], stdtestarray) / (nobs + 1)

I think you will want a yield statement at the end. As I said, I took
this from a larger function.

Vincent

On Fri, Jun 25, 2010 at 11:04 AM, R. Padraic Springuel wrote:
> Each of the 15 arrays between which I want the calculations has
> 6,695,970 entries in it. I tried feeding corrcoef just two of the lists
> and while I don't get a MemoryError (the python exception), I do get a
> couple of these errors:
>
> Python(337) malloc: *** mmap(size=2678390784) failed (error code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
>
> They don't seem to make the program fail, however, as I still get a
> result from corrcoef.
>
> I found a function I'd written some time before to calculate the
> correlation coefficient that doesn't raise that error and its results
> agree with those from corrcoef, so there has to be an implementation
> thing going on here with memory usage.
> --
>
> R.
Padraic Springuel > Research Assistant > Department of Physics and Astronomy > University of Maine > Bennett 309 > Office Hours: By Appointment Only > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pgmdevlist at gmail.com Sat Jun 26 21:07:05 2010 From: pgmdevlist at gmail.com (Pierre GM) Date: Sat, 26 Jun 2010 21:07:05 -0400 Subject: [SciPy-User] mary, a masked array In-Reply-To: References: <75D1317E-ED32-4C68-A9CF-F2073CF73F4C@gmail.com> Message-ID: <5B1DCDD5-F50E-4DDF-8ED6-12B3319E5693@gmail.com> On Jun 26, 2010, at 1:10 PM, Keith Goodman wrote: > On Sat, Jun 26, 2010 at 9:42 AM, Pierre GM wrote: >> >> You're re-implementing the original version of MaskedArray :) >> (in numpy <1.2, a masked array was the combination of a standard ndarray (your data) and either a boolean ndarray or a boolean (your mask)... That's quite OK, as long as you're not bothered by the fact that a larray/mary is not an array. > > Ah, that's good to know. I'll take a look. Thank you. You're quite welcome. The reason why I want MaskedArray to be a subclass of ndarray was that it's making things easier to subclass MaskedArray while keeping the functionalities of a ndarray. Now, the good thing is that it works, the bad thing is that it slows things down. At least we have an ideal test suite when MaskedArrays will be ported to C... >>> Comments and suggestions are welcomed. I'm not familiar with np.ma so >>> I imagine there are many issues I haven't thought through. >> >> What happens if you calculate sqrt(-1) with a mary ? > > Same as np.sqrt(-1) which gives NaN. But, as coded, the mask does not > get updated even if the marker is NaN. So far only assignment by > indexing updates the mask. Hey, whatever fits you needs... From josef.pktd at gmail.com Sun Jun 27 12:51:05 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 27 Jun 2010 12:51:05 -0400 Subject: [SciPy-User] incomplete beta function B(x; k+1, 0) ? In-Reply-To: References: Message-ID: On Sat, Jun 26, 2010 at 6:56 PM, Pauli Virtanen wrote: > Sat, 26 Jun 2010 17:59:09 -0400, josef.pktd wrote: >> Is there a incomplete beta function with a zero argument ?B(x; k+1, 0) >> ?available in scipy? >> >>>>> special.betainc( np.arange(5)+1, 0, 0.5) >> array([ NaN, ?NaN, ?NaN, ?NaN, ?NaN]) > >>>> print special.betainc.__doc__ > betainc(x1, x2, x3[, out]) > y=betainc(a,b,x) returns the incomplete beta integral of the > arguments, evaluated from zero to x: gamma(a+b) / (gamma(a)*gamma(b)) > * integral(t**(a-1) (1-t)**(b-1), t=0..x). > > So the function in Scipy has an additional normalization prefactor. The > prefactor however seems to be zero for b=0, so this function in Scipy > probably can't easily be used for finding the value of the integral at > b=0. > > But if you just set b=1e-99 (or anything smaller than the machine > epsilon) and divide the prefactor away, you should end up with the result > you are looking for. > > At b=0 the integral is logarithmically divergent towards x->1, and a > finite b > 0 cuts this off around x ~ 1 - exp(-1/b) --- so it shouldn't > matter. The rest of the integrand should also saturate at b ~ machine > epsilon. > > But apparently, there's something fishy in the betainc algorithm for x > close to 1 and b close to 0, for example this discontinuity: > > ? ? ? ?>>> sc.betainc(1, 1.000002e-6, 1 - 1e-6) > ? ? ? ?1.3815442755027441e-05 > ? ? ? ?>>> sc.betainc(1, 1.000001e-6, 1 - 1e-6) > ? ? ? 
?1.2397031262149499e-05 > > So while things in principle are OK, you shouldn't probably trust things > beyond x > 1 - 1e-3. Thanks, it works this way, including low precision for x close to 1 >>> a=np.arange(20) >>> b=1.000002e-16 >>> x=gamma(a+b) / (gamma(a)*gamma(b)) >>> p=0.5; (1 + special.betainc(np.arange(20)+1, 1.000002e-16, p)/x/np.log(1-p))[1:] - stats.logser.cdf(np.arange(1,20),p) array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -1.11022302e-16, -1.11022302e-16, -1.11022302e-16, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.11022302e-16, 1.11022302e-16, 1.11022302e-16, 1.11022302e-16]) >>> p=1-1e-6; (1 + special.betainc(np.arange(20)+1, 1.000002e-16, p)/x/np.log(1-p))[1:] - stats.logser.cdf(np.arange(1,20),p) array([ 0.10246633, 0.10226642, 0.10206728, 0.10186891, 0.1016713 , 0.10147445, 0.10127835, 0.101083 , 0.10088839, 0.10069451, 0.10050137, 0.10030896, 0.10011726, 0.09992629, 0.09973603, 0.09954648, 0.09935764, 0.09916949, 0.09898204]) >>> p=1-1e-3; (1 + special.betainc(np.arange(20)+1, 1.000002e-16, p)/x/np.log(1-p))[1:] - stats.logser.cdf(np.arange(1,20),p) array([ 1.16573418e-14, -1.38777878e-16, -6.77236045e-15, 9.88098492e-15, 3.65263375e-14, -3.55271368e-15, -1.98729921e-14, 1.66533454e-16, 3.55271368e-15, 1.42663659e-14, 7.10542736e-15, -1.14908083e-14, 2.05946371e-14, 1.94289029e-15, 1.62647673e-14, 1.17683641e-14, -2.16493490e-15, 2.55351296e-15, 1.35447209e-14]) >>> p=1-1e-4; (1 + special.betainc(np.arange(20)+1, 1.000002e-16, p)/x/np.log(1-p))[1:] - stats.logser.cdf(np.arange(1,20),p) array([ 3.78329849e-06, 3.70895520e-06, 3.63619061e-06, 3.56496856e-06, 3.49525364e-06, 3.42701179e-06, 3.36020887e-06, 3.29481244e-06, 3.23079046e-06, 3.16811184e-06, 3.10674608e-06, 3.04666343e-06, 2.98783512e-06, 2.93023291e-06, 2.87382897e-06, 2.81859665e-06, 2.76450961e-06, 2.71154229e-06, 2.65966984e-06]) So, I guess this is not really worth it since the generic calculations is just a sum of relatively simpler terms, which only in the case of p close to 1 has a larger number of terms to sum up. Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From wnbell at gmail.com Sun Jun 27 15:05:34 2010 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 27 Jun 2010 15:05:34 -0400 Subject: [SciPy-User] scipy.sparse.csr_matrix.matmat deprecation question In-Reply-To: References: Message-ID: On Fri, Jun 25, 2010 at 1:09 PM, Andrew Schein wrote: > I would like to perform a matrix multiplication of the form > > A * B > > where A is dense and B is sparse CSR or COO.? Does scipy.sparse have this > capability and will it in the future?? How fast is the scipy implementation > in comparison to INTEL MKL? > > It appears that there is a .matmat function that has been deprecated.? Does > this reflect a retreat, or is the functionality found in some other place? > > Thanks, > Hi Andrew, All sparse matrix multiplication functionality is exposed via __mul__() now, so the matmat function is unnecessary. Simply using A*B should do the appropriate thing. I don't know how the speed compares to MKL, but the code is implemented in C++ so it should be reasonably fast. 
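A minimal sketch of that usage (the matrices are illustrative; depending
on the scipy version the dense result may come back as an ndarray or an
np.matrix):

import numpy as np
from scipy import sparse

A = np.random.rand(4, 5)             # dense ndarray
B = sparse.csr_matrix(np.eye(5, 3))  # sparse CSR

# dense * sparse dispatches to the sparse matrix product and
# returns a dense result of shape (4, 3):
C = A * B
print C.shape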
--
Nathan Bell
wnbell at gmail.com
http://www.wnbell.com/

From seb.haase at gmail.com Sun Jun 27 17:13:44 2010
From: seb.haase at gmail.com (Sebastian Haase)
Date: Sun, 27 Jun 2010 23:13:44 +0200
Subject: [SciPy-User] Fwd: [Scipy-tickets] [SciPy] #1212: Single precision FFT insufficiently accurate.
In-Reply-To: References: Message-ID:

this workaround seems unacceptable if single-precision was used (by
the user of SciPy) because of memory constraints .... !!!

Regards,
-Sebastian Haase

On Sun, Jun 27, 2010 at 2:53 PM, ? wrote:
> #1212: Single precision FFT insufficiently accurate.
> ---------------------------+------------------------------------------------
>  Reporter:  rgommers       |       Owner:  somebody
>      Type:  defect         |      Status:  new
>  Priority:  normal         |   Milestone:  0.8.0
> Component:  scipy.fftpack  |     Version:  0.7.0
>  Keywords:                 |
> ---------------------------+------------------------------------------------
>
> Comment(by pv):
>
>  Work-around for 0.8.x committed in r6570
>
> --
> Ticket URL:
> SciPy
> SciPy is open-source software for mathematics, science, and engineering.
> _______________________________________________
> Scipy-tickets mailing list
> Scipy-tickets at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-tickets
>

From pav at iki.fi Sun Jun 27 19:29:27 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 27 Jun 2010 23:29:27 +0000 (UTC)
Subject: [SciPy-User] Single precision FFT insufficiently accurate.
References: Message-ID:

Sun, 27 Jun 2010 23:13:44 +0200, Sebastian Haase wrote:
> this workaround seems unacceptable if single-precision was used (by the
> user of SciPy) because of memory constraints .... !!!

Suggest a better one, then.

We cannot just return incorrect results, which FFTPACK seems to produce,
especially as this easily occurs in the size range where memory
constraints would be important.

Moreover, single-precision FFT is a new feature in 0.8.x, so it probably
does not have an exceedingly large number of users yet who rely on it.

--
Pauli Virtanen

From vincent at vincentdavis.net Sun Jun 27 20:41:54 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Sun, 27 Jun 2010 18:41:54 -0600
Subject: [SciPy-User] Installing on osx 10.6, py 2.6
Message-ID:

Ok, I am at it again, trying to build/install from source. It fails;
full details are available here:
http://pastebin.com/KT08eLiZ

The last bit is below. I am trying to learn more about this, but
there is a lot I don't know.
Thanks,
Vincent

ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64
in file for architecture ppc64
collect2: ld returned 1 exit status
lipo: can't open input file:
/var/folders/2f/2fiXYQSSE+CgAzDQPp9+k++++TI/-Tmp-//ccs45Qej.out (No
such file or directory)
error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686
-arch x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle
build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/scipy/fftpack/_fftpackmodule.o
build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zfft.o
build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/drfft.o
build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zrfft.o
build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zfftnd.o
build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/scipy/fftpack/src/dct.o
build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/fortranobject.o
-Lbuild/temp.macosx-10.4-x86_64-2.6 -ldfftpack -lfftpack -lgfortran -o
build/lib.macosx-10.4-x86_64-2.6/scipy/fftpack/_fftpack.so" failed
with exit status 1
MacBookPro-new-2:scipy-vmd-dev vmd$

From cournape at gmail.com Sun Jun 27 22:30:55 2010
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 28 Jun 2010 11:30:55 +0900
Subject: [SciPy-User] Single precision FFT insufficiently accurate.
In-Reply-To: References: Message-ID:

On Mon, Jun 28, 2010 at 8:29 AM, Pauli Virtanen wrote:
> Sun, 27 Jun 2010 23:13:44 +0200, Sebastian Haase wrote:
>> this workaround seems unacceptable if single-precision was used (by the
>> user of SciPy) because of memory constraints .... !!!
>
> Suggest a better one, then.
>
> We cannot just return incorrect results, which FFTPACK seems to produce,
> especially as this easily occurs in the size range where memory
> constraints would be important.
>
> Moreover, single-precision FFT is a new feature in 0.8.x, so it probably
> does not have an exceedingly large number of users yet who rely on it.

Nevertheless, Sebastian's remark just made me realize that using double
instead of single in some cases which are input-dependent is not that
great. It means that some program may work in some cases, but will not
in others (because of memory issues).

Maybe it is better to remove it altogether in 0.8.0 - I will try to
implement the Bluestein algorithm during EuroSciPy,

cheers,

David

From aarchiba at physics.mcgill.ca Sun Jun 27 23:16:31 2010
From: aarchiba at physics.mcgill.ca (Anne Archibald)
Date: Sun, 27 Jun 2010 23:16:31 -0400
Subject: [SciPy-User] Single precision FFT insufficiently accurate.
In-Reply-To: References: Message-ID:

On 27 June 2010 22:30, David Cournapeau wrote:
> On Mon, Jun 28, 2010 at 8:29 AM, Pauli Virtanen wrote:
>> Sun, 27 Jun 2010 23:13:44 +0200, Sebastian Haase wrote:
>>> this workaround seems unacceptable if single-precision was used (by the
>>> user of SciPy) because of memory constraints .... !!!
>>
>> Suggest a better one, then.
>>
>> We cannot just return incorrect results, which FFTPACK seems to produce,
>> especially as this easily occurs in the size range where memory
>> constraints would be important.
>>
>> Moreover, single-precision FFT is a new feature in 0.8.x, so it probably
>> does not have an exceedingly large number of users yet who rely on it.
>
> Nevertheless, Sebastian's remark just made me realize that using double
> instead of single in some cases which are input-dependent is not that
> great. It means that some program may work in some cases, but will not
> in others (because of memory issues).
From aarchiba at physics.mcgill.ca Sun Jun 27 23:16:31 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sun, 27 Jun 2010 23:16:31 -0400 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On 27 June 2010 22:30, David Cournapeau wrote: > On Mon, Jun 28, 2010 at 8:29 AM, Pauli Virtanen wrote: >> Sun, 27 Jun 2010 23:13:44 +0200, Sebastian Haase wrote: >>> this workaround seems unacceptable if single precision was used (by the >>> user of SciPy) because of memory constraints ... !!! >> >> Suggest a better one, then. >> >> We cannot just return incorrect results, which FFTPACK seems to produce, >> especially as this easily occurs in the size range where memory >> constraints would be important. >> >> Moreover, single-precision FFT is a new feature in 0.8.x, so it probably >> does not have an exceedingly large number of users yet who rely on it. > > Nevertheless, Sebastian's remark just made me realize that using double > instead of single in some cases which are input-dependent is not that > great. It means that some program may work in some cases, but will not > in others (because of memory issues). I think falling back to double in this case is perfectly acceptable - after all, any user of the FFT in general has to know that the behaviour is severely data-dependent. In fact, since our FFT for those sizes seems to be O(n**2), they will almost certainly find that speed impels them to switch long before memory becomes an issue: the smallest array where I can imagine a user caring about the usage of a temporary double array is in the tens of millions of elements, and with our current FFTPACK implementation that will be so slow as to be unusable - unless they use a product of powers of 2, 3, and 5. In that case they get an O(n log n) algorithm - and they get direct single computation. > Maybe it is better to remove it altogether in 0.8.0 - I will try to > implement the Bluestein algorithm during EuroSciPy, This would definitely be a big improvement - I don't care so much about the precision issue, though it is important, but having a decently-fast algorithm for arbitrary sizes would be a good idea. That said, even with FFTW3, which is pretty good about using the best algorithm for your particular case, it often pays to pad rather than use an awkward size (though the best padding is not necessarily power-of-two, according to my time trials): http://lighthouseinthesky.blogspot.com/2010/03/flops-and-fft.html So weird-size FFTs don't completely take the burden of padding off the user (though I suspect that there are some oddball applications where the exact FFT size matters). Anne From cournape at gmail.com Mon Jun 28 00:40:44 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 28 Jun 2010 13:40:44 +0900 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 12:16 PM, Anne Archibald wrote: > > I think falling back to double in this case is perfectly acceptable - > after all, any user of the FFT in general has to know that the > behaviour is severely data-dependent. In fact, since our FFT for those > sizes seems to be O(n**2), they will almost certainly find that speed > impels them to switch long before memory becomes an issue: the > smallest array where I can imagine a user caring about the usage of a > temporary double array is in the tens of millions of elements Or if you run many 1d FFTs on a 2d array - a typical example is in audio processing, where each row would be a window of a few hundred samples, and you have as many rows as you can fit memory-wise. > That said, even with FFTW3, which is pretty good about using the best > algorithm for your particular case, it often pays to pad rather than > use an awkward size (though the best padding is not necessarily > power-of-two, according to my time trials): > http://lighthouseinthesky.blogspot.com/2010/03/flops-and-fft.html > So weird-size FFTs don't completely take the burden of padding off the > user Oh, definitely. Any FFT user should know that a power of two should be used whenever possible/feasible. But generally, if you care about memory, padding has a significant cost in some cases (like the one I mentioned above). David From vincent at vincentdavis.net Mon Jun 28 01:03:25 2010 From: vincent at vincentdavis.net (Vincent Davis) Date: Sun, 27 Jun 2010 23:03:25 -0600 Subject: [SciPy-User] Installing on osx 10.6, py 2.6 In-Reply-To: References: Message-ID: I got it to install on py27 64bit. Vincent On Sun, Jun 27, 2010 at 6:41 PM, Vincent Davis wrote: > Ok, I am at it again, trying to build/install from source.
It fails; > full details are available here: > http://pastebin.com/KT08eLiZ > > The last bit is below. I am trying to learn more about this, but > there is a lot I don't know. > > Thanks > Vincent > > ld: in /usr/lib/libSystem.B.dylib, missing required architecture ppc64 > in file for architecture ppc64 > collect2: ld returned 1 exit status > lipo: can't open input file: > /var/folders/2f/2fiXYQSSE+CgAzDQPp9+k++++TI/-Tmp-//ccs45Qej.out (No > such file or directory) > error: Command "/usr/local/bin/gfortran -Wall -arch ppc -arch i686 > -arch x86_64 -arch ppc64 -Wall -undefined dynamic_lookup -bundle > build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/scipy/fftpack/_fftpackmodule.o > build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zfft.o > build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/drfft.o > build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zrfft.o > build/temp.macosx-10.4-x86_64-2.6/scipy/fftpack/src/zfftnd.o > build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/scipy/fftpack/src/dct.o > build/temp.macosx-10.4-x86_64-2.6/build/src.macosx-10.4-x86_64-2.6/fortranobject.o > -Lbuild/temp.macosx-10.4-x86_64-2.6 -ldfftpack -lfftpack -lgfortran -o > build/lib.macosx-10.4-x86_64-2.6/scipy/fftpack/_fftpack.so" failed > with exit status 1 > MacBookPro-new-2:scipy-vmd-dev vmd$ > From aarchiba at physics.mcgill.ca Mon Jun 28 02:05:15 2010 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 28 Jun 2010 02:05:15 -0400 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On 28 June 2010 00:40, David Cournapeau wrote: > On Mon, Jun 28, 2010 at 12:16 PM, Anne Archibald > wrote: > >> >> I think falling back to double in this case is perfectly acceptable - >> after all, any user of the FFT in general has to know that the >> behaviour is severely data-dependent. In fact, since our FFT for those >> sizes seems to be O(n**2), they will almost certainly find that speed >> impels them to switch long before memory becomes an issue: the >> smallest array where I can imagine a user caring about the usage of a >> temporary double array is in the tens of millions of elements > > Or if you run many 1d FFTs on a 2d array - a typical example is in audio > processing, where each row would be a window of a few hundred samples, > and you have as many rows as you can fit memory-wise. I guess the question here is where we do the conversion - of course it is easiest to do that outside all the FFT loops. But it seems like that iteration over all-but-one dimension of the array will often need to copy the data anyway to get a contiguous 1d array; I'd say that's the natural place to do upconversion when necessary. (Though, does FFTPACK provide strided/iterated FFTs? My only compiled-language experience is with FFTW, which does.) Failing that, I still think we're better providing single-precision FFTs for easy factorizations than for none at all; my basic point was that the FFT's behaviour already depends very strongly on the size of the input array, so this is not a new form of "data dependence". >> That said, even with FFTW3, which is pretty good about using the best >> algorithm for your particular case, it often pays to pad rather than >> use an awkward size (though the best padding is not necessarily >> power-of-two, according to my time trials): >> http://lighthouseinthesky.blogspot.com/2010/03/flops-and-fft.html >> So weird-size FFTs don't completely take the burden of padding off the >> user > > Oh, definitely.
Any FFT user should know that a power of two should be > used whenever possible/feasible. But generally, if you care about > memory, padding has a significant cost in some cases (like the one I > mentioned above). This fact, which I also "knew", turns out to be false. (For FFTW; can't speak for FFTPACK.) The performance numbers in the blog post I linked to show that padding to the next larger power of two is substantially slower than padding to a number with a more complex factorization, though less padding and more complexity slows it back down a little. This is solely for the FFT, and it's for quite a large FFT. Any subsequent processing will also of course be affected by the size you pad it to. Anyway, my point is: data size dependence is an unavoidable curse of the FFT, and the "right" size to pad to is often not at all obvious. Anne From pav at iki.fi Mon Jun 28 04:29:29 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 28 Jun 2010 08:29:29 +0000 (UTC) Subject: [SciPy-User] Single precision FFT insufficiently accurate. References: Message-ID: Mon, 28 Jun 2010 11:30:55 +0900, David Cournapeau wrote: [clip] > Maybe it is better to remove it altogether in 0.8.0 - I will try to > implement the Bluestein algorithm during EuroSciPy, Yes, I guess removing the support altogether is the second alternative for 0.8.x. It's not really clear to me which is more useful for the user -- a partially functional feature, or none at all. Ok, and the point that the casting can cause problems because of large-sized axes orthogonal to the FFT direction is taken, I didn't think about that. If we want to evaluate the FFT in blocks in such cases, that probably can be done but is a bit fiddly to get right. -- Pauli Virtanen From cournape at gmail.com Mon Jun 28 06:38:37 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 28 Jun 2010 19:38:37 +0900 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 3:05 PM, Anne Archibald wrote: > > I guess the question here is where we do the conversion - of course it > is easiest to do that outside all the FFT loops. But it seems like > that iteration over all-but-one dimension of the array will often need > to copy the data anyway to get a contiguous 1d array; I'd say that's > the natural place to do upconversion when necessary. (Though, does > FFTPACK provide strided/iterated FFTs? My only compiled-language > experience is with FFTW, which does.) > > Failing that, I still think we're better providing single-precision > FFTs for easy factorizations than for none at all; my basic point was > that the FFT's behaviour already depends very strongly on the size of > the input array, so this is not a new form of "data dependence". Right, this significantly weakens my objection. David From seb.haase at gmail.com Mon Jun 28 07:21:25 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Mon, 28 Jun 2010 13:21:25 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 12:38 PM, David Cournapeau wrote: > On Mon, Jun 28, 2010 at 3:05 PM, Anne Archibald > wrote: > >> >> I guess the question here is where we do the conversion - of course it >> is easiest to do that outside all the FFT loops. But it seems like >> that iteration over all-but-one dimension of the array will often need >> to copy the data anyway to get a contiguous 1d array; I'd say that's >> the natural place to do upconversion when necessary.
(Though, does >> FFTPACK provide strided/iterated FFTs? My only compiled-language >> experience is with FFTW, which does.) >> >> Failing that, I still think we're better providing single-precision >> FFTs for easy factorizations than for none at all; my basic point was >> that the FFT's behaviour already depends very strongly on the size of >> the input array, so this is not a new form of "data dependence". > > Right, this significantly weakens my objection. > What size of error are we talking about anyway? Personally I would leave it in, and make a note in the doc-string about the expected precision error for non-multiple-of-2 sizes for single-precision floats. Maybe one could (for now) even append an option for "workaroundFloat32PrecisionLoss" - Sebastian From pav at iki.fi Mon Jun 28 07:45:07 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 28 Jun 2010 11:45:07 +0000 (UTC) Subject: [SciPy-User] Single precision FFT insufficiently accurate. References: Message-ID: Mon, 28 Jun 2010 13:21:25 +0200, Sebastian Haase wrote: [clip] > What size of error are we talking about anyway? We are talking about 0.1% ... 5% relative error, import numpy as np from scipy.fftpack import fft, ifft x = np.random.rand(2011).astype(np.float32) np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) # -> 0.001 norm-2 relative error x = np.random.rand(2012).astype(np.float32) np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) # -> 6e-5 norm-2 relative error x = np.random.rand(8923).astype(np.float32) np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) # -> 0.03 norm-2 relative error x = np.random.rand(8925).astype(np.float32) np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) # -> 2.4902545e-07 norm-2 relative error So for "difficult" cases the error is up to several orders of magnitude larger than for the "easy" cases. > Personally I would > leave it in, and make a note in the doc-string about the expected precision > error for non-multiple-of-2 sizes for single-precision floats. > Maybe one could (for now) even append an option for > "workaroundFloat32PrecisionLoss" Several percent errors are not something I'd like to leave for the users to sort out by themselves, even if mentioned in the documentation. I would perhaps rather drop the feature in 0.8 and wait for a proper fix in 0.9 (hopefully later this year), than add keyword arguments that we have to deprecate later on. -- Pauli Virtanen From ralphkube at googlemail.com Mon Jun 28 08:36:38 2010 From: ralphkube at googlemail.com (Ralph Kube) Date: Mon, 28 Jun 2010 14:36:38 +0200 Subject: [SciPy-User] scipy.optimize.leastsq question Message-ID: <4C289756.4070406@googlemail.com> Hello people, I am having a problem using the leastsq routine. My goal is to determine three parameters r_i, r_s and ppw so that the residuals of a model function a(r_i, r_s, ppw) with respect to a measurement are minimal. When I call the leastsq routine with a good guess of starting values, it iterates 6 times without changing the values of the initial parameters and then exits without an error. The function a is very complicated and expensive to evaluate. Some evaluation is done by using the subprocess module of Python. Can this pose a problem for the leastsq routine? This is in the main routine: import numpy as N from scipy.optimize import leastsq for t_idx, t in enumerate(time_var): r_i = 300. r_s = 1.0 ppw = 1e-6 sza = 70. wl = N.arange(300., 3001., 1.)
albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl) # This emulates the measurement data albedo_meas = albedo_true + 0.01*N.random.randn(len(wl)) print 'Optimizing albedo' p0 = [2.*r_i, 1.4*r_s, 4.*ppw] plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, wl)) print '... done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2] albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1], plsq2[0][2], sza, wl) The residual function: def albedo_residual(p, y, sza, wvl): r_i, r_s, ppw = p albedo = compute_albedo(r_i, r_s, ppw, sza, wvl) err = albedo - y print 'Albedo for r_i = %4.0f, r_s = %4.2f, ppw = %3.2e \ Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2)) return err The definition of the function a(r_i, r_s, ppw) def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl): The output is: Optimizing albedo Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: 0.973819 ... done: 600.0 1.4 4e-06 To check for errors, I implemented the example code from http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it runs successfully. I would be glad for any suggestion. Cheers, Ralph From ralf.gommers at googlemail.com Mon Jun 28 08:37:02 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 28 Jun 2010 20:37:02 +0800 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 7:45 PM, Pauli Virtanen wrote: > Mon, 28 Jun 2010 13:21:25 +0200, Sebastian Haase wrote: > [clip] > > What size of error are talking about anyway ..? > > We are talking about 0.1% ... 5% relative error, > > import numpy as np > from scipy.fftpack import fft, ifft > > x = np.random.rand(2011).astype(np.float32) > np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) > # -> 0.001 norm-2 relative error > > x = np.random.rand(2012).astype(np.float32) > np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) > # -> 6e-5 norm-2 relative error > > x = np.random.rand(8923).astype(np.float32) > np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) > # -> 0.03 norm-2 relative error > > x = np.random.rand(8925).astype(np.float32) > np.linalg.norm(x - ifft(fft(x))) / np.linalg.norm(x) > # -> 2.4902545e-07 norm-2 relative error > > So for "difficult" cases the error is up to several orders of magnitude > larger than for the "easy" cases. > > > Personally I would > > leave it in, and make a note in the doc-string about expected precision > > error for non multiple-2 dimensions for single precision float. > > Maybe one could (for now) even append an option for > > "workaroundFloat32PrecisionLoss" > > Several percent errors are not something I'd like to leave for the users > to sort out by themselves, even if mentioned in the documentation. > > I would perhaps rather drop the feature in 0.8 and wait for a proper fix > in 0.9 (hopefully later this year), than add keyword arguments that we > have to deprecate later on. > > Anne's argument sounded convincing, so I think the way it is now in the 0.8.x branch is fine. Probably good to add a warning in the docstrings like: .. 
note:: In scipy 0.8.0 `fft` in single precision is available, but *only* for input array sizes which can be factorized into (combinations of) 2, 3 and 5. For other sizes the computation will be done in double precision. Ralf > -- > Pauli Virtanen > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Mon Jun 28 09:44:32 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 28 Jun 2010 08:44:32 -0500 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: <4C289756.4070406@googlemail.com> References: <4C289756.4070406@googlemail.com> Message-ID: <4C28A740.5020806@gmail.com> On 06/28/2010 07:36 AM, Ralph Kube wrote: > Hello people, > I am having a problem using the leastsq routine. My goal is to > determine three parameters r_i, r_s and ppw so that the residuals > of a model function a(r_i, r_s, ppw) with respect to a measurement are minimal. > When I call the leastsq routine with a good guess of starting values, it > iterates 6 times without changing the values of the initial parameters > and then exits without an error. > The function a is very complicated and expensive to evaluate. Some > evaluation is done by using the subprocess module of Python. Can this > pose a problem for the leastsq routine? > > > This is in the main routine: > > import numpy as N > from scipy.optimize import leastsq > > for t_idx, t in enumerate(time_var): > > r_i = 300. > r_s = 1.0 > ppw = 1e-6 > sza = 70. > wl = N.arange(300., 3001., 1.) > > albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl) > # This emulates the measurement data > albedo_meas = albedo_true + 0.01*N.random.randn(len(wl)) > > print 'Optimizing albedo' > p0 = [2.*r_i, 1.4*r_s, 4.*ppw] > plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, > wl)) > print '... done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2] > albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1], plsq2[0][2], > sza, wl) > > The residual function: > def albedo_residual(p, y, sza, wvl): > r_i, r_s, ppw = p > albedo = compute_albedo(r_i, r_s, ppw, sza, wvl) > err = albedo - y > print 'Albedo for r_i = %4.0f, r_s = %4.2f, ppw = %3.2e \ > Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2)) > > return err > > The definition of the function a(r_i, r_s, ppw) > def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl): > > The output is: > Optimizing albedo > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > ... done: 600.0 1.4 4e-06 > > To check for errors, I implemented the example code from > http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it > runs successfully. > > I would be glad for any suggestion. > > Cheers, Ralph > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > There are other optimization functions available such as those in the 'optimize' subpackage ( http://docs.scipy.org/scipy/docs/scipy.optimize/ ) that may be more suited to this problem.
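For instance, a derivative-free Nelder-Mead fit can be built directly on the residual function quoted above. A minimal sketch, reusing the names from the quoted code (albedo_residual, p0, albedo_meas, sza, wl):

    import numpy as N
    from scipy.optimize import fmin

    def albedo_sum_sq(p, y, sza, wvl):
        # scalar objective for the simplex method: summed squared residuals
        err = albedo_residual(p, y, sza, wvl)
        return N.sum(err**2)

    p_min = fmin(albedo_sum_sq, p0, args=(albedo_meas, sza, wl))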
You probably have a scaling issue because your 'r_i' parameter is huge compared to your 'ppw' parameter (300 vs 0.000001). This is really really important if your model is nonlinear. So please try to standardize your values so that the parameters have similar magnitude - even just division/multiplication by some power of 10 can make a huge difference. If these parameters are so different or you need 'leastsq' then you probably should try either grid searching or fixing one or two parameters at a time. This will at least give you an idea on the possible values. Bruce -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Mon Jun 28 09:54:03 2010 From: sturla at molden.no (Sturla Molden) Date: Mon, 28 Jun 2010 15:54:03 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: <4C28A97B.2030306@molden.no> Den 28.06.2010 08:05, skrev Anne Archibald: > The performance numbers in the blog post I linked to show that padding > to the next larger power of two is substantially slower than padding > to a number with a more complex factorization, though less padding and > more complexity slows it back down a little. That's something FFTW exploits to be fast. It tries various factorizations (and paddings?) and "learns" the fastest. Two other factors that play a role here are data alignment and cache use. It's not just the flop count that matters. At the least we should always use buffers aligned to 16-byte boundaries (or a multiple of 16), so the compiler can be allowed to generate SIMD code (MMX, SSE/SSE2, AltiVec). We can tell that to the C compiler using __declspec(align(16)) on arrays, possibly with the C99 restrict qualifier as well. The differences between these matter a lot to the C compiler: __declspec(align(16)) const double *restrict array; // free to vectorize at will double *array; // aliased? aligned? who can tell? Aligning buffers in NumPy is easy: def aligned_buffer(shape, boundary=4096, dtype=np.float64, order='C'): N,d = np.prod(shape), np.dtype(dtype) tmp = np.empty(N * d.itemsize + boundary, dtype=np.uint8) address = tmp.__array_interface__['data'][0] offset = (boundary - address % boundary) % boundary return tmp[offset:offset+N*d.itemsize]\ .view(dtype=d)\ .reshape(shape, order=order) I don't think this is too important though. I usually begin with the smallest possible FFT size, and increment the padding by one until I get a product of 2, 3 or 5. I have yet to see a case where this has not been sufficient for my work. Being a little bit practical is affordable too. :-) Sturla -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Mon Jun 28 09:58:46 2010 From: sturla at molden.no (Sturla Molden) Date: Mon, 28 Jun 2010 15:58:46 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: <4C28AA96.8050902@molden.no> Den 28.06.2010 14:37, skrev Ralf Gommers: > Anne's argument sounded convincing, so I think the way it is now in > the 0.8.x branch is fine. Probably good to add a warning in the > docstrings like: > > .. note:: In scipy 0.8.0 `fft` in single precision is available, > but *only* > for input array sizes which can be factorized into > (combinations of) 2, > 3 and 5. For other sizes the computation will be done in double > precision. This is actually quite diagnostic. 2, 3, and 5 are the only primes FFTPACK supports. Larger primes have an O(N**2) fallback. The source of the (rounding?) error must therefore be in the O(N**2) fallback code. Sturla
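Sturla's padding rule is easy to spell out in code; a small sketch:

    def next_fast_size(n):
        # smallest m >= n whose only prime factors are 2, 3 and 5, i.e.
        # a size FFTPACK handles on its O(m log m) code path
        while True:
            m = n
            for p in (2, 3, 5):
                while m % p == 0:
                    m //= p
            if m == 1:
                return n
            n += 1

Passing the result as the length argument, e.g. scipy.fftpack.fft(x, next_fast_size(len(x))), zero-pads the transform onto the fast (and, in 0.8.x, genuinely single-precision) path.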
From sturla at molden.no Mon Jun 28 10:24:11 2010 From: sturla at molden.no (Sturla Molden) Date: Mon, 28 Jun 2010 16:24:11 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: Message-ID: <4C28B08B.9060203@molden.no> Den 28.06.2010 13:21, skrev Sebastian Haase: > What size of error are we talking about anyway? > Personally I would leave it in, > Leave in FFT code that produces 5% relative error? Single-precision is not the solution to memory issues anyway. Get a 64 bit system and buy more RAM. Buying RAM is far cheaper than even re-coding for single precision, if you value the time spent coding, not to mention that the result is far more accurate. Single-precision used to be faster than double precision some 30 years ago. And on 8 bit and 16 bit computers, memory did matter more. For example on a 16 bit CPU with power-of-2 FFT, the largest FFT size would be just 2048 in double precision. With single precision you could get 4096 ... Oorah! Today we rarely see those issues. Python does not even support single precision. Sturla From seb.haase at gmail.com Mon Jun 28 10:48:42 2010 From: seb.haase at gmail.com (Sebastian Haase) Date: Mon, 28 Jun 2010 16:48:42 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate.
In-Reply-To: <4C28B08B.9060203@molden.no> References: <4C28B08B.9060203@molden.no> Message-ID: On Mon, Jun 28, 2010 at 4:24 PM, Sturla Molden wrote: > Den 28.06.2010 13:21, skrev Sebastian Haase: >> What size of error are we talking about anyway? >> Personally I would leave it in, >> > > Leave in FFT code that produces 5% relative error? > > Single-precision is not the solution to memory issues anyway. Get a 64 > bit system and buy more RAM. Buying RAM is far cheaper than even > re-coding for single precision, if you value the time spent coding, not > to mention that the result is far more accurate. > > Single-precision used to be faster than double precision some 30 years > ago. And on 8 bit and 16 bit computers, memory did matter more. For > example on a 16 bit CPU with power-of-2 FFT, the largest FFT size would > be just 2048 in double precision. With single precision you could get > 4096 ... Oorah! Today we rarely see those issues. Python does not even > support single precision. That's why "numerical"(!) Python is so great ;-) I'm working with image (sequence) data where the raw data (2-byte unsigned int) often approaches 1 GB. To open (memmap) those I learned to like 64-bit Linux a while ago --- it's really great. Just wanted to remind you that data really can get large, such that "just buy more memory" also reaches its limits. - Sebastian (long time advocate of single precision -- check the archives .... ;-) ) From ralphkube at googlemail.com Mon Jun 28 10:52:16 2010 From: ralphkube at googlemail.com (Ralph Kube) Date: Mon, 28 Jun 2010 16:52:16 +0200 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: <4C28A740.5020806@gmail.com> References: <4C289756.4070406@googlemail.com> <4C28A740.5020806@gmail.com> Message-ID: <4C28B720.4020902@googlemail.com> Den 28.06.10 15.44, skrev Bruce Southey: > You probably have a scaling issue because your 'r_i' parameter is huge > compared to your 'ppw' parameter (300 vs 0.000001). This is really > really important if your model is nonlinear. So please try to standardize > your values so that the parameters have similar magnitude - even just > division/multiplication by some power of 10 can make a huge difference. > If these parameters are so different or you need 'leastsq' then you > probably should try either grid searching or fixing one or two > parameters at a time. This will at least give you an idea on the > possible values. > > Bruce I have little experience with non-linear optimization, so using least squares was a first-guess approach. The model is much more sensitive to the r_i and r_s parameters than it is to the ppw parameter. In the approach I use, all quantities are physical units which serve as input parameters to existing routines. They demand the given order of magnitude for r_i, r_s and ppw. I rewrote them, so that the input variables have the same order of magnitude, and rescale them when I pass them to these routines. Then I tried to let leastsq vary only r_i while keeping r_s and ppw fixed. Still, the problem persists: Optimizing albedo Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 Residual squared: 0.235837 Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 Residual squared: 0.235837 Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 Residual squared: 0.235837 Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 Residual squared: 0.235837 ... done. Found r_snow = 4.2 Is this still the scaling problem?
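One way to follow Bruce's scaling advice without touching the physical routines is a thin wrapper around the residual function. A sketch with hypothetical scale factors, reusing the names from the thread (albedo_residual, p0, albedo_meas, sza, wl):

    import numpy as N
    from scipy.optimize import leastsq

    scales = N.array([100., 1., 1e-6])   # hypothetical: brings r_i, r_s, ppw to order 1

    def albedo_residual_scaled(q, y, sza, wvl):
        # leastsq works on q = p/scales, all of order 1; the model still
        # receives the physical parameters p = q*scales
        return albedo_residual(q * scales, y, sza, wvl)

    q0 = N.asarray(p0) / scales
    qlsq = leastsq(albedo_residual_scaled, q0, args=(albedo_meas, sza, wl))
    p_fit = qlsq[0] * scales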
From sebastian.walter at gmail.com Mon Jun 28 11:13:50 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Mon, 28 Jun 2010 17:13:50 +0200 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: <4C289756.4070406@googlemail.com> References: <4C289756.4070406@googlemail.com> Message-ID: there may be others who have more experience with scipy.optimize.leastsq. From the mathematical point of view you should be certain that your function is continuously differentiable or at least (Lipschitz-)continuous. This is because scipy.optimize.leastsq uses the Levenberg-Marquardt algorithm, which requires the Jacobian J(x) = dF/dx. You do not provide an analytic Jacobian for scipy.optimize.leastsq. That means that scipy.optimize.leastsq uses some finite differences approximation to approximate the Jacobian J(x). It can happen that this finite differences approximation is so poor that no descent direction for the residual can be found. So the first thing I would check is if the Jacobian J(x) makes sense. You should be able to extract it from scipy.optimize.leastsq's output infodict['fjac']. Then I'd check if (F(x + h*v) - F(x))/h, for h \approx 10**-8, gives the same vector as dot(J(x),v); if this doesn't match at all, then your Jacobian is wrong, or your function is not continuously differentiable. Hope this helps a little, Sebastian
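A minimal sketch of both checks, where F stands for whatever residual callable is handed to leastsq and the step h is an assumption to be chosen on the scale where the model actually changes:

    import numpy as N

    def fd_jacobian(F, x, h=1e-3, args=()):
        # forward-difference Jacobian, one column per parameter
        x = N.asarray(x, dtype=float)
        f0 = N.asarray(F(x, *args))
        J = N.zeros((f0.size, x.size))
        for i in range(x.size):
            dx = N.zeros_like(x)
            dx[i] = h
            J[:, i] = (N.asarray(F(x + dx, *args)) - f0) / h
        return J

    # sanity check in a random direction v:
    # (F(x + h*v) - F(x))/h should be close to N.dot(J, v)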
On Mon, Jun 28, 2010 at 2:36 PM, Ralph Kube wrote: > Hello people, > I am having a problem using the leastsq routine. My goal is to > determine three parameters r_i, r_s and ppw so that the residuals > of a model function a(r_i, r_s, ppw) with respect to a measurement are minimal. > When I call the leastsq routine with a good guess of starting values, it > iterates 6 times without changing the values of the initial parameters > and then exits without an error. > The function a is very complicated and expensive to evaluate. Some > evaluation is done by using the subprocess module of Python. Can this > pose a problem for the leastsq routine? > > > This is in the main routine: > > import numpy as N > from scipy.optimize import leastsq > > for t_idx, t in enumerate(time_var): > > r_i = 300. > r_s = 1.0 > ppw = 1e-6 > sza = 70. > wl = N.arange(300., 3001., 1.) > > albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl) > # This emulates the measurement data > albedo_meas = albedo_true + 0.01*N.random.randn(len(wl)) > > print 'Optimizing albedo' > p0 = [2.*r_i, 1.4*r_s, 4.*ppw] > plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, > wl)) > print '... done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2] > albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1], plsq2[0][2], > sza, wl) > > The residual function: > def albedo_residual(p, y, sza, wvl): > r_i, r_s, ppw = p > albedo = compute_albedo(r_i, r_s, ppw, sza, wvl) > err = albedo - y > print 'Albedo for r_i = %4.0f, r_s = %4.2f, ppw = %3.2e \ > Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2)) > > return err > > The definition of the function a(r_i, r_s, ppw) > def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl): > > The output is: > Optimizing albedo > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: > 0.973819 > ... done: 600.0 1.4 4e-06 > > To check for errors, I implemented the example code from > http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it > runs successfully. > > I would be glad for any suggestion. > > > Cheers, Ralph > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sturla at molden.no Mon Jun 28 11:35:14 2010 From: sturla at molden.no (Sturla Molden) Date: Mon, 28 Jun 2010 17:35:14 +0200 Subject: [SciPy-User] Single precision FFT insufficiently accurate. In-Reply-To: References: <4C28B08B.9060203@molden.no> Message-ID: <4C28C132.7050108@molden.no> Den 28.06.2010 16:48, skrev Sebastian Haase: > To open (memmap) those I learned to like 64-bit Linux a while ago > --- it's really great. > Memory mapping large files is the reason I switched to 64-bit Windows as well. Mine are about 4.5 GB... While recent Pythons (not 2.6) take an offset argument to mmap, it's easier to just grab the whole file. :-) Sturla From bsouthey at gmail.com Mon Jun 28 12:03:08 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 28 Jun 2010 11:03:08 -0500 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: <4C28B720.4020902@googlemail.com> References: <4C289756.4070406@googlemail.com> <4C28A740.5020806@gmail.com> <4C28B720.4020902@googlemail.com> Message-ID: <4C28C7BC.4000208@gmail.com> On 06/28/2010 09:52 AM, Ralph Kube wrote: > > Den 28.06.10 15.44, skrev Bruce Southey: > >> You probably have a scaling issue because your 'r_i' parameter is huge >> compared to your 'ppw' parameter (300 vs 0.000001). This is really >> really important if your model is nonlinear. So please try to standardize >> your values so that the parameters have similar magnitude - even just >> division/multiplication by some power of 10 can make a huge difference. >> If these parameters are so different or you need 'leastsq' then you >> probably should try either grid searching or fixing one or two >> parameters at a time. This will at least give you an idea on the >> possible values. >> >> Bruce >> > I have little experience with non-linear optimization, so using least > squares was a first-guess approach. > The model is much more sensitive to the r_i and r_s parameters than it > is to the ppw parameter. In the approach I use, all quantities are > physical units which serve as input parameters to existing routines. > They demand the given order of magnitude for r_i, r_s and ppw. > I rewrote them, so that the input variables have the same order of > magnitude, and rescale them when I pass them to these routines. > Then I tried to let leastsq vary only r_i while keeping r_s and ppw > fixed. Still, the problem persists: > > > Optimizing albedo > Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 > Residual squared: 0.235837 > Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 > Residual squared: 0.235837 > Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 > Residual squared: 0.235837 > Albedo for r_ice = 4.200000, r_soot = 1.000000, ppw = 1.000000e+00 > Residual squared: 0.235837 > ... done. Found r_snow = 4.2 > > > Is this still the scaling problem?
> > > We do expect that your data is correct, your 'compute_albedo' is correct, you have suitable starting values, etc. As Sebastian says, take a very careful look at the Jacobian, as my guess is that the search-space surface is flat. You probably can see that by plotting the data across a grid of parameter values for r_i and r_s - there should be some sort of curvature for optimization functions to work. Bruce From lists at hilboll.de Mon Jun 28 13:54:54 2010 From: lists at hilboll.de (Andreas) Date: Mon, 28 Jun 2010 18:54:54 +0100 Subject: [SciPy-User] optimization w/ constraints Message-ID: <4C28E1EE.50100@hilboll.de> hi there, i need to minimize a function in 19 variables x_1, ... x_19, with constraints x_i * a = x_{i+8}, i=4,...,11 so actually, it's a 12 variable problem, but some of the variables get multiplied with each other ... how would i do that using scipy? i noticed scipy.optimize.slsqp, where i can specify the f_eqcons parameter. just wanted to make sure i'm actually on the right path ... thanks for your insight! cheers, andreas. From robert.kern at gmail.com Mon Jun 28 14:01:17 2010 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jun 2010 13:01:17 -0500 Subject: [SciPy-User] optimization w/ constraints In-Reply-To: <4C28E1EE.50100@hilboll.de> References: <4C28E1EE.50100@hilboll.de> Message-ID: On Mon, Jun 28, 2010 at 12:54, Andreas wrote: > hi there, > > i need to minimize a function in 19 variables x_1, ... x_19, with > constraints > > x_i * a = x_{i+8}, i=4,...,11 > > so actually, it's a 12 variable problem, but some of the variables get > multiplied with each other ... I highly recommend simply recoding your function (or wrapping it in another function) so that it expands from the 12 independent variables to the 19. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
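A sketch of that wrapper, assuming (as the "12 variable problem" remark suggests) that the factor a is the twelfth unknown, with f19 standing for the hypothetical original 19-variable objective; indices follow the 1-based naming above:

    import numpy as np

    def expand(z):
        # z holds the 12 independent unknowns: x_1..x_11 plus the factor a
        x = np.empty(19)
        x[:11] = z[:11]
        a = z[11]
        x[11:19] = a * x[3:11]   # x_{i+8} = a * x_i for i = 4,...,11
        return x

    def f12(z):
        # unconstrained 12-variable objective to hand to the optimizer
        return f19(expand(z))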
From matthew.brett at gmail.com Mon Jun 28 16:40:34 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Jun 2010 16:40:34 -0400 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: Hi, > I propose moving the scipy/io/matlab directory to scipy/io/matfile and have > the __init__.py file for scipy.io import scipy.io.matfile as matlab. I > don't know if that works for all of the ways one would call that module. > Also, is there any sort of way to make a deprecation warning fire for > importing scipy.io.matlab, but not for scipy.io.matfile? I never had to do > any sort of fancy module setup, so I am not sure what is best. Sorry to be slow to reply - I was offline for a few days. Only to say that - from playing with a toy package - I think that won't work for people who have done things like import scipy.io.matlab or from scipy.io.matlab import loadmat and I'm not sure what would, apart from a symbolic link - but then I don't know how you'd raise the deprecation warning. I still don't know how important the renaming would be - although I take your point about there only being matfile reading in the package. Lacking further votes from the ether, maybe we should defer action until we're in clearer agreement about how urgent all this is. See you, Matthew From kenneth.arnold at gmail.com Mon Jun 28 20:19:07 2010 From: kenneth.arnold at gmail.com (Kenneth Arnold) Date: Mon, 28 Jun 2010 20:19:07 -0400 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 4:40 PM, Matthew Brett wrote: > Hi, > > > I propose moving the scipy/io/matlab directory to scipy/io/matfile and have > > the __init__.py file for scipy.io import scipy.io.matfile as matlab. I > > don't know if that works for all of the ways one would call that module. > > Also, is there any sort of way to make a deprecation warning fire for > > importing scipy.io.matlab, but not for scipy.io.matfile? I never had to do > > any sort of fancy module setup, so I am not sure what is best. > > I think that won't work > for people who have done things like > > import scipy.io.matlab > > or > > from scipy.io.matlab import loadmat > > and I'm not sure what would, apart from a symbolic link - but then I > don't know how you'd raise the deprecation warning. > You could put the following in scipy/io/matlab.py: import warnings warnings.warn(...) from scipy.io.matfile import * -Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Tue Jun 29 00:19:01 2010 From: ben.root at ou.edu (Benjamin Root) Date: Mon, 28 Jun 2010 23:19:01 -0500 Subject: [SciPy-User] Matlab trademark - was: Re: SciPy-User Digest, Vol 82, Issue 49 In-Reply-To: References: Message-ID: On Mon, Jun 28, 2010 at 7:19 PM, Kenneth Arnold wrote: > On Mon, Jun 28, 2010 at 4:40 PM, Matthew Brett wrote: >> Hi, >> >> > I propose moving the scipy/io/matlab directory to scipy/io/matfile and have >> > the __init__.py file for scipy.io import scipy.io.matfile as matlab. I >> > don't know if that works for all of the ways one would call that module. >> > Also, is there any sort of way to make a deprecation warning fire for >> > importing scipy.io.matlab, but not for scipy.io.matfile? I never had to do >> > any sort of fancy module setup, so I am not sure what is best. >> >> I think that won't work >> for people who have done things like >> >> import scipy.io.matlab >> >> or >> >> from scipy.io.matlab import loadmat >> >> and I'm not sure what would, apart from a symbolic link - but then I >> don't know how you'd raise the deprecation warning. >> > You could put the following in scipy/io/matlab.py: > > import warnings > warnings.warn(...) > from scipy.io.matfile import * > > -Ken > > Ok, I think that would work very nicely for those who directly import scipy.io.matlab. Something else will have to be done for the __init__.py file for scipy.io. It currently imports from "matlab.mio" and "matlab.byteordercodes". Also, it makes available a "matlab" namespace somehow when you import scipy.io. (Note that there is currently a "matlab" directory in the scipy/io directory with its own __init__.py and other parts to do file reading and writing.) How does that fit in with everything? Ben Root > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
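Filling in the warn(...) call from Ken's sketch; everything here is hypothetical as long as scipy.io.matfile does not exist, and, as Ben points out, scipy.io.matlab is currently a package (a directory) rather than a single module, so a plain matlab.py shim could not simply be dropped in:

    # hypothetical shim for scipy/io/matlab.py
    import warnings
    warnings.warn("scipy.io.matlab has been renamed to scipy.io.matfile; "
                  "the old name will be removed in a future release",
                  DeprecationWarning)
    from scipy.io.matfile import *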
From ralphkube at googlemail.com Tue Jun 29 03:28:31 2010 From: ralphkube at googlemail.com (Ralph Kube) Date: Tue, 29 Jun 2010 09:28:31 +0200 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: References: <4C289756.4070406@googlemail.com> Message-ID: <4C29A09F.5010504@googlemail.com> Thank you, I found the error this way. The Jacobian is indeed very hard to compute, and the leastsq routine computes a zero Jacobian. The albedo function I want to minimize does not change for values of h \approx 10**-8, but on scales of h \approx 10**-3. I am now using the fmin function and working with other routines that do not require any information about the derivative. They seem more appropriate to my problem. Cheers, Ralph Den 28.06.10 17.13, skrev Sebastian Walter: > there may be others who have more experience with scipy.optimize.leastsq. > > From the mathematical point of view you should be certain that your > function is continuously differentiable or at least > (Lipschitz-)continuous. > This is because scipy.optimize.leastsq uses the Levenberg-Marquardt > algorithm, which requires the Jacobian J(x) = dF/dx. > > You do not provide an analytic Jacobian for scipy.optimize.leastsq. > That means that scipy.optimize.leastsq uses some finite differences > approximation to approximate the Jacobian J(x). > It can happen that this finite differences approximation is so poor > that no descent direction for the residual can be found. > > So the first thing I would check is if the Jacobian J(x) makes sense. > You should be able to extract it from > scipy.optimize.leastsq's output infodict['fjac']. > > Then I'd check if > (F(x + h*v) - F(x))/h, for h \approx 10**-8 > > gives the same vector as dot(J(x),v) > if this doesn't match at all, then your Jacobian is wrong, or your > function is not continuously differentiable. > > Hope this helps a little, > Sebastian > > > > On Mon, Jun 28, 2010 at 2:36 PM, Ralph Kube wrote: >> Hello people, >> I am having a problem using the leastsq routine. My goal is to >> determine three parameters r_i, r_s and ppw so that the residuals >> of a model function a(r_i, r_s, ppw) with respect to a measurement are minimal. >> When I call the leastsq routine with a good guess of starting values, it >> iterates 6 times without changing the values of the initial parameters >> and then exits without an error. >> The function a is very complicated and expensive to evaluate. Some >> evaluation is done by using the subprocess module of Python. Can this >> pose a problem for the leastsq routine? >> >> >> This is in the main routine: >> >> import numpy as N >> from scipy.optimize import leastsq >> >> for t_idx, t in enumerate(time_var): >> >> r_i = 300. >> r_s = 1.0 >> ppw = 1e-6 >> sza = 70. >> wl = N.arange(300., 3001., 1.) >> >> albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl) >> # This emulates the measurement data >> albedo_meas = albedo_true + 0.01*N.random.randn(len(wl)) >> >> print 'Optimizing albedo' >> p0 = [2.*r_i, 1.4*r_s, 4.*ppw] >> plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, >> wl)) >> print '... 
done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2] >> albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1], plsq2[0][2], >> sza, wl) >> >> The residual function: >> def albedo_residual(p, y, sza, wvl): >> r_i, r_s, ppw = p >> albedo = compute_albedo(r_i, r_s, ppw, sza, wvl) >> err = albedo - y >> print 'Albedo for r_i = %4.0f, r_s = %4.2f, ppw = %3.2e \ >> Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2)) >> >> return err >> >> The definition of the function a(r_i, r_s, ppw) >> def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl): >> >> The output is: >> Optimizing albedo >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> Albedo for r_i = 600, r_s = 1.40, ppw = 4.00e-06 Residual squared: >> 0.973819 >> ... done: 600.0 1.4 4e-06 >> >> To check for errors, I implemented the example code from >> http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it >> runs successfully. >> >> I would be glad for any suggestion. >> >> >> Cheers, Ralph >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Cheers, Ralph From sebastian.walter at gmail.com Tue Jun 29 03:46:09 2010 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Tue, 29 Jun 2010 09:46:09 +0200 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: <4C29A09F.5010504@googlemail.com> References: <4C289756.4070406@googlemail.com> <4C29A09F.5010504@googlemail.com> Message-ID: On Tue, Jun 29, 2010 at 9:28 AM, Ralph Kube wrote: > Thank you, I found the error this way. The Jacobian is indeed very > hard to compute, and the leastsq routine computes a zero Jacobian. > The albedo function I want to minimize does not change for values > of h \approx 10**-8, but on scales h \approx 10**-3. > I now use the fmin function and working with other functions that do not > require any information about the derivative. They seem more appropriate > to my problem. Only use derivative free optimization methods if your problem is not continuous. If your problem is differentiable, you should compute the Jacobian yourself, e.g. with def myJacobian(x): h = 10**-3 # do finite differences approximation return .... and provide the Jacobian to scipy.optimize.leastsq(..., Dfun = myJacobian) This should work much better/reliable/faster than any of the alternatives. Also, using Algorithmic Differentiation to compute the Jacobian would probably help in terms of robustness and convergence speed of leastsq. Sebastian > > Cheers, Ralph > > Den 28.06.10 17.13, skrev Sebastian Walter: >> there may be others who have more experience with scipy.optimize.leastsq. >> >>> From the mathematical point of view you should be certain that your >> function is continuously differentiable or at least >> (Lipschitz-)continuous. >> This is because ?scipy.optimize.leastsq ?uses the Levenberg-Marquardt >> algorithm which requires the Jacobian J(x) = dF/dx. >> >> You do not provide an analytic Jacobian for scipy.optimize.leastsq. 
>> That means that scipy.optimize.leastsq uses some finite differences >> approximation to approximate the Jacobian J(x). >> It can happen that this finite differences approximation is so poor >> that no descent direction for the residual can be found. >> >> So the first thing I would check is if the Jacobian J(x) makes sense. >> You should be able to extract it from >> scipy.optimize.leastsq's output infodict['fjac']. >> >> Then I'd check if >> F(x + h*v) - F(x)/h, for h \approx 10**-8 >> >> gives the same vector as ? dot(J(x),v) >> if this doesn't match at all, then your Jacobian is wrong resp. your >> function is not continuously differentiable. >> >> Hope this helps a little, >> Sebastian >> >> >> >> On Mon, Jun 28, 2010 at 2:36 PM, Ralph Kube ?wrote: >>> Hello people, >>> I am having a problem using the leastsq routine. My goal is to >>> determine three parameters r_i, r_s and ppw so that the residuals >>> to a model function a(r_i, r_s, ppw) to a measurement are minimal. >>> When I call the leastsq routine with a good guess of starting values, it >>> iterates 6 times without changing the vales of the initial parameters >>> and then exits without an error. >>> The function a is very complicated and expensive to evaluate. Some >>> evaluation is done by using the subprocess module of python. Can this >>> pose a problem for the leastsq routine? >>> >>> >>> This is in the main routine: >>> >>> import numpy as N >>> >>> for t_idx, t in enumerate(time_var): >>> >>> ? ? ? ? r_i = 300. >>> ? ? ? ? r_s = 1.0 >>> ? ? ? ? ppw=1e-6 >>> ? ? ? ? sza = 70. >>> ? ? ? ? wl = N.arange(300., 3001., 1.) >>> >>> ? ? ? ? albedo_true = compute_albedo(r_i, r_s, ppw, sza, wl) >>> ? ? ? ? # This emulates the measurement data >>> ? ? ? ? albedo_meas = albedo_true + 0.01*N.random.randn(len(wl)) >>> >>> ? ? ? ? print 'Optimizing albedo' >>> ? ? ? ? p0 = [2.*r_i, 1.4*r_s, 4.*ppw] >>> ? ? ? ? plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, >>> wl)) >>> ? ? ? ? print '... done: ', plsq2[0][0], plsq2[0][1], plsq2[0][2] >>> ? ? ? ? albedo_model = compute_albedo(plsq2[0][0], plsq2[0][1], plsq2[0][2], >>> sza, wl) >>> >>> The residual function: >>> def albedo_residual(p, y, sza, wvl): >>> ? ? ? ? r_i, r_s, ppw = p >>> ? ? ? ? albedo = compute_albedo(r_i, r_s, ppw, sza, wvl) >>> ? ? ? ? err = albedo - y >>> ? ? ? ? print 'Albedo for ?r_i = %4.0f, r_s = %4.2f, ppw = %3.2e \ >>> ? ? ? ? ? ? ? ? Residual squared: %5f' % (r_i, r_s, ppw, N.sum(err**2)) >>> >>> ? ? ? ? return err >>> >>> The definition of the function a(r_i, r_s, ppw) >>> def compute_albedo(radius_ice, radius_soot, ppw, sza, wvl): >>> >>> The output is: >>> Optimizing albedo >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> Albedo for r_i = ?600, r_s = 1.40, ppw = 4.00e-06 ? ? ? ? ? ? ? Residual squared: >>> 0.973819 >>> ... done: ?600.0 1.4 4e-06 >>> >>> To check for errors, I implemented the example code from >>> http://www.tau.ac.il/~kineret/amit/scipy_tutorial/ in my code and it >>> runs successfully. >>> >>> I would be glad for any suggestion. 
>>> >>> >>> Cheers, Ralph >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > > Cheers, Ralph > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Jun 29 03:52:04 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 29 Jun 2010 03:52:04 -0400 Subject: [SciPy-User] scipy.optimize.leastsq question In-Reply-To: References: <4C289756.4070406@googlemail.com> <4C29A09F.5010504@googlemail.com> Message-ID: On Tue, Jun 29, 2010 at 3:46 AM, Sebastian Walter wrote: > On Tue, Jun 29, 2010 at 9:28 AM, Ralph Kube wrote: >> Thank you, I found the error this way. The Jacobian is indeed very >> hard to compute, and the leastsq routine computes a zero Jacobian. >> The albedo function I want to minimize does not change for values >> of h \approx 10**-8, but on scales h \approx 10**-3. >> I now use the fmin function and working with other functions that do not >> require any information about the derivative. They seem more appropriate >> to my problem. > > Only use derivative free optimization methods if your problem is not continuous. > If your problem is differentiable, you should compute the Jacobian > yourself, e.g. with > > def myJacobian(x): > ? ? h = 10**-3 > ? ? # do finite differences approximation > ? ? return .... > > and provide the Jacobian to > scipy.optimize.leastsq(..., Dfun = myJacobian) > This should work much better/reliable/faster than any of the alternatives. Maybe increasing the step length in the options to leastsq also works: epsfcn ? A suitable step length for the forward-difference approximation of the Jacobian (for Dfun=None). I don't think I have tried for leastsq, but for some fmin it works much better with larger step length for the finite difference approximation. Josef > > Also, using Algorithmic Differentiation to compute the Jacobian would > probably help in terms of robustness and convergence speed of leastsq. > > Sebastian > > > > > >> >> Cheers, Ralph >> >> Den 28.06.10 17.13, skrev Sebastian Walter: >>> there may be others who have more experience with scipy.optimize.leastsq. >>> >>>> From the mathematical point of view you should be certain that your >>> function is continuously differentiable or at least >>> (Lipschitz-)continuous. >>> This is because ?scipy.optimize.leastsq ?uses the Levenberg-Marquardt >>> algorithm which requires the Jacobian J(x) = dF/dx. >>> >>> You do not provide an analytic Jacobian for scipy.optimize.leastsq. >>> That means that scipy.optimize.leastsq uses some finite differences >>> approximation to approximate the Jacobian J(x). >>> It can happen that this finite differences approximation is so poor >>> that no descent direction for the residual can be found. >>> >>> So the first thing I would check is if the Jacobian J(x) makes sense. >>> You should be able to extract it from >>> scipy.optimize.leastsq's output infodict['fjac']. >>> >>> Then I'd check if >>> F(x + h*v) - F(x)/h, for h \approx 10**-8 >>> >>> gives the same vector as ? dot(J(x),v) >>> if this doesn't match at all, then your Jacobian is wrong resp. your >>> function is not continuously differentiable. 
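A minimal sketch of the forward-difference Jacobian Sebastian outlines,
wired up to Ralph's albedo_residual from earlier in the thread; it is
untested against the real compute_albedo, and the h = 1e-3 step just
follows Ralph's report of the scale on which the albedo actually changes:

import numpy as np
from scipy.optimize import leastsq

def my_jacobian(p, y, sza, wvl):
    # forward differences, one column per parameter
    f0 = albedo_residual(p, y, sza, wvl)
    J = np.zeros((len(f0), len(p)))
    for j in range(len(p)):
        ph = np.asarray(p, dtype=float).copy()
        h = 1e-3 * abs(ph[j]) if ph[j] != 0 else 1e-3  # crude relative step
        ph[j] += h
        J[:, j] = (albedo_residual(ph, y, sza, wvl) - f0) / h
    return J

# either hand leastsq the Jacobian explicitly ...
plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, wl),
                Dfun=my_jacobian)
# ... or just raise the default finite-difference step, as Josef suggests:
plsq2 = leastsq(albedo_residual, p0, args=(albedo_meas, sza, wl),
                epsfcn=1e-6)

With the default col_deriv=0, Dfun must return the Jacobian with one row
per residual and one column per parameter, as above.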
From sebastian.walter at gmail.com  Tue Jun 29 04:59:21 2010
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Tue, 29 Jun 2010 10:59:21 +0200
Subject: [SciPy-User] scipy.optimize.leastsq question
In-Reply-To: 
References: <4C289756.4070406@googlemail.com> <4C29A09F.5010504@googlemail.com>
Message-ID: 

On Tue, Jun 29, 2010 at 9:52 AM, wrote:
> Maybe increasing the step length in the options to leastsq also works:
>
> epsfcn -- A suitable step length for the forward-difference
> approximation of the Jacobian (for Dfun=None).
>
> I don't think I have tried it for leastsq, but for some fmin it works
> much better with a larger step length for the finite difference
> approximation.

Choosing the right "step length" h is an art that I don't know much
about. But apparently one rule of thumb is to use

    h = abs(x) * sqrt(numpy.finfo(float).eps)

to compute

    f'(x) = (f(x+h) - f(x))/h

i.e. if one has x = [1, 10**-3, 10**4] one would have to scale h with
1, 10**-3 and 10**4.

Regarding epsfcn: I find the documentation of leastsq a "little" confusing.

    epsfcn -- A suitable step length for the forward-difference
    approximation of the Jacobian (for Dfun=None). If epsfcn is less
    than the machine precision, it is assumed that the relative errors
    in the functions are of the order of the machine precision.

In particular I don't quite get what is meant by "relative errors in
the functions". Which "functions" does it refer to?

Sebastian

> Josef
>
> [...]
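In code, the rule of thumb Sebastian quotes might look like this; a
sketch only, and it leaves open what to do at components where x_i = 0:

import numpy as np

def scaled_steps(x):
    # one forward-difference step per component, scaled by |x_i|
    x = np.asarray(x, dtype=float)
    return np.abs(x) * np.sqrt(np.finfo(float).eps)

print scaled_steps([1.0, 10**-3, 10**4])
# -> steps of roughly 1.5e-08, 1.5e-11, 1.5e-04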
From rcsqtc at iqac.csic.es  Tue Jun 29 07:03:51 2010
From: rcsqtc at iqac.csic.es (Ramon Crehuet)
Date: Tue, 29 Jun 2010 13:03:51 +0200
Subject: [SciPy-User] f2py: questions on array arguments
Message-ID: <4C29D317.2010305@iqac.csic.es>

Dear all,
I have a couple of questions on f2py.

1. The following fortran function:

function f2(x,y)
  implicit none
  real, intent(in):: x,y
  real, dimension(3):: f2
  f2(1)=x+y**2
  f2(2)=sin(x*y)
  f2(3)=2*x-y
end function f2

gives a segmentation fault when called from python if it is not in a
fortran module. If it is contained in a fortran module, it works fine
and returns an array. That makes sense because fortran modules
automatically generate an interface. However, I don't see that
reflected in the .pyf files generated by f2py. So, is there a way to
"correct" the function outside the module to work with f2py?

2. I have read in several posts that automatic arrays do not work with
f2py. So that something like:

real function trace(m)
  real, dimension(:,:), intent(in) :: m

has to be converted into:

real function trace2(m,n)
  integer, intent(in) :: n
  !f2py integer, intent(hide), depend(m) :: n=shape(m,0)
  real, dimension(n,n), intent(in) :: m

which works fine with f2py but is not nice in fortran. I've tried to
do something like:

real function trace(m)
  !f2py integer, depend(m) :: n=shape(m,0)
  !f2py real, dimension(n,n), intent(in) :: m
  real, dimension(:,:), intent(in) :: m

But it does not work. Is there a workaround to avoid passing the
dimension of the matrix as a fortran argument?
Thanks in advance!
Ramon
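For reference, a hypothetical build-and-call session for the trace2
workaround above; the file name trace.f90 and module name mytrace are
made up here, only the intent(hide) behaviour comes from the post:

# build once from the shell:  f2py -c -m mytrace trace.f90
import numpy as np
import mytrace   # the hypothetical module name chosen above

m = np.ones((3, 3), dtype=np.float32)  # fortran 'real' maps to float32
# n is intent(hide), so it disappears from the python signature and
# f2py fills it in from m's shape:
print mytrace.trace2(m)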
From almar.klein at gmail.com  Tue Jun 29 07:21:44 2010
From: almar.klein at gmail.com (Almar Klein)
Date: Tue, 29 Jun 2010 13:21:44 +0200
Subject: [SciPy-User] ANN: Visvis version 1.3 (includes meshes and lighting)
Message-ID: 

Hi all,

I am excited to announce version 1.3 of Visvis, the object oriented
approach to visualization.

Website: http://code.google.com/p/visvis/
Discussion group: http://groups.google.com/group/visvis/
Documentation: http://code.google.com/p/visvis/wiki/Visvis_basics

The largest improvement is the Mesh class to represent triangular and
quad meshes and surface data. The Axes class got a property to access 8
different light sources. These improvements enable numerous new
possibilities to visualize data using Visvis. Further changes include
the introduction of polar plotting and 3D bar charts. For a (more)
complete list of changes see the release notes.

=== Description ===

Visvis is a pure Python library for visualization of 1D to 4D data in
an object oriented way. Essentially, visvis is an object oriented layer
of Python on top of OpenGl, thereby combining the power of OpenGl with
the usability of Python. A Matlab-like interface in the form of a set
of functions allows easy creation of objects (e.g. plot(), imshow(),
volshow(), surf()).

Regards,
Almar

From ralphkube at googlemail.com  Tue Jun 29 08:02:51 2010
From: ralphkube at googlemail.com (Ralph Kube)
Date: Tue, 29 Jun 2010 14:02:51 +0200
Subject: [SciPy-User] scipy.optimize.leastsq question
In-Reply-To: 
References: <4C289756.4070406@googlemail.com> <4C29A09F.5010504@googlemail.com>
Message-ID: <4C29E0EB.3020608@googlemail.com>

>> Only use derivative free optimization methods if your problem is not
>> continuous. If your problem is differentiable, you should compute the
>> Jacobian yourself, e.g. with
>>
>> def myJacobian(x):
>>     h = 10**-3
>>     # do finite differences approximation
>>     return ....
>>
>> and provide the Jacobian to
>> scipy.optimize.leastsq(..., Dfun = myJacobian)
>> This should work much better/reliable/faster than any of the alternatives.
>
> Maybe increasing the step length in the options to leastsq also works:
>
> epsfcn -- A suitable step length for the forward-difference
> approximation of the Jacobian (for Dfun=None).
>
> I don't think I have tried it for leastsq, but for some fmin it works
> much better with a larger step length for the finite difference
> approximation.
>
> Josef

Okay, I got leastsq working when I manually compute the Jacobian. The
function I want to compute has non-trivial dependencies on its input
parameters and the Jacobian has some regions where it does not change
at all. But manually specifying the step length for the finite
difference scheme in the Jacobian helps.

Cheers, Ralph

From nwagner at iam.uni-stuttgart.de  Wed Jun 30 02:45:20 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jun 2010 08:45:20 +0200
Subject: [SciPy-User] How can I build an rpm of lapack
Message-ID: 

Hi all,

SUSE (and Red Hat) regularly shipped versions of the BLAS library where
some functions were missing. Hence I would like to build my own rpms of
lapack and blas. Where can I find some instructions to build rpms of
lapack and blas?

Any pointer would be appreciated.

Nils

From david at silveregg.co.jp  Wed Jun 30 03:11:36 2010
From: david at silveregg.co.jp (David)
Date: Wed, 30 Jun 2010 16:11:36 +0900
Subject: [SciPy-User] How can I build an rpm of lapack
In-Reply-To: 
References: 
Message-ID: <4C2AEE28.7010801@silveregg.co.jp>

On 06/30/2010 03:45 PM, Nils Wagner wrote:
> Hi all,
>
> SUSE (and Red Hat) regularly shipped versions of the BLAS
> library where some functions were missing. Hence I would
> like to build my own rpms of lapack and blas.
> Where can I find some instructions to build rpms of lapack
> and blas?
>
> Any pointer would be appreciated.

http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/index.html

Note that the learning curve is pretty involved.
Packaging your own packages only makes sense if you need to install the
same software on many machines.

Besides rpm, you need to know the conventions of your distribution (an
RPM for SUSE is not the same as an RPM for RH, which is itself different
from an RPM for Fedora).

David

From nwagner at iam.uni-stuttgart.de  Wed Jun 30 03:52:24 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jun 2010 09:52:24 +0200
Subject: [SciPy-User] How can I build an rpm of lapack
In-Reply-To: <4C2AEE28.7010801@silveregg.co.jp>
References: <4C2AEE28.7010801@silveregg.co.jp>
Message-ID: 

On Wed, 30 Jun 2010 16:11:36 +0900, David wrote:
> http://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/index.html
>
> Note that the learning curve is pretty involved. Packaging your own
> packages only makes sense if you need to install the same software on
> many machines.
>
> Besides rpm, you need to know the conventions of your distribution (an
> RPM for SUSE is not the same as an RPM for RH, which is itself
> different from an RPM for Fedora).
>
> David

Hi David,

Thank you for your hint. I would like to install lapack on many
machines (CentOS release 5.2 (Final)). Actually, I was thinking of a
"buildbot" system to automate the compile cycle. I just started with a
shell script. Is it possible to add some lines in order to build an rpm?

wget http://www.netlib.org/lapack/lapack-3.1.1.tgz
tar zxvf lapack-3.1.1.tgz
cd lapack-3.1.1
cp INSTALL/make.inc.gfortran make.inc
#
# Now, you must edit the make.inc file to ensure that the OPTS and NOOPT
# lines both contain the flag for compiling position-independent code on
# your platform (e.g. with gcc/gfortran it is -fPIC).
#
cd SRC
make
...

Nils

From david at silveregg.co.jp  Wed Jun 30 04:24:15 2010
From: david at silveregg.co.jp (David)
Date: Wed, 30 Jun 2010 17:24:15 +0900
Subject: [SciPy-User] How can I build an rpm of lapack
In-Reply-To: 
References: <4C2AEE28.7010801@silveregg.co.jp>
Message-ID: <4C2AFF2F.5080400@silveregg.co.jp>

On 06/30/2010 04:52 PM, Nils Wagner wrote:
>
> Hi David,
>
> Thank you for your hint.
> I would like to install lapack on many machines (CentOS release 5.2
> (Final)). Actually, I was thinking of a "buildbot" system to automate
> the compile cycle. I just started with a shell script. Is it possible
> to add some lines in order to build an rpm?

You should really look at the build service from open suse, and get
your rpms from there:

https://build.opensuse.org/project/show?project=science

cheers,

David

From nwagner at iam.uni-stuttgart.de  Wed Jun 30 04:42:12 2010
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jun 2010 10:42:12 +0200
Subject: [SciPy-User] How can I build an rpm of lapack
In-Reply-To: <4C2AFF2F.5080400@silveregg.co.jp>
References: <4C2AEE28.7010801@silveregg.co.jp> <4C2AFF2F.5080400@silveregg.co.jp>
Message-ID: 

On Wed, 30 Jun 2010 17:24:15 +0900, David wrote:
> You should really look at the build service from open suse, and get
> your rpms from there:
>
> https://build.opensuse.org/project/show?project=science

Hm, can I use those rpms for CentOS as well?

Nils

From denis-bz-gg at t-online.de  Wed Jun 30 06:06:45 2010
From: denis-bz-gg at t-online.de (denis)
Date: Wed, 30 Jun 2010 03:06:45 -0700 (PDT)
Subject: [SciPy-User] interpolation with inverse-distance weighting + KDTree
Message-ID: <14ef8436-9ad9-45a8-b606-495b575a1f2a@y11g2000yqm.googlegroups.com>

Folks,
here's a tiny class Invdisttree for interpolation with inverse-distance
weighting + KDTree. It's solid, pretty fast, local, works for scattered
data in any number of dimensions, and leverages the excellent KDTree
module. Comments would be welcome; real test cases, 3+d, most welcome.

(For interpolating 2d data to a fine uniform grid,
matplotlib._delaunay.nn_interpolate_grid is ~ 10 times faster, on my old
mac ppc. One reason is that the dot( 1/dist, z[ix] ) takes over half the
time; another may be that nn_grid caches the current triangle?)

cheers
-- denis

import numpy as np
from scipy.spatial import cKDTree as KDTree

class Invdisttree:
    """ inverse-distance-weighted interpolation using KDTree:
        invdisttree = Invdisttree( X, z )  -- points, values
        interpol = invdisttree( q, k=6, eps=0 )
            -- interpolate z from the 6 points nearest each q;
               q may be one point, or a batch of points
    """
    def __init__( self, X, z, leafsize=10 ):
        self.tree = KDTree( X, leafsize=leafsize )  # build the tree
        self.z = z

    def __call__( self, q, k=6, eps=0 ):
        # k nearest neighbours of each query point --
        self.distances, self.ix = self.tree.query( q, k=k, eps=eps )
        interpol = []  # np.zeros( (len(self.distances),) + np.shape(self.z[0]) )
        for dist, ix in zip( self.distances, self.ix ):
            if dist[0] > 1e-10:
                w = 1 / dist
                wz = np.dot( w, self.z[ix] ) / np.sum(w)  # weight z's by 1/dist
            else:  # query point coincides with a data point
                wz = self.z[ix[0]]
            interpol.append( wz )
        return interpol
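A quick smoke test for denis's Invdisttree class above; the data here is
invented purely to exercise the code:

import numpy as np

np.random.seed(1)
X = np.random.uniform(size=(100, 2))     # 100 scattered points in 2d
z = np.sin(6*X[:,0]) * np.cos(6*X[:,1])  # known values at those points
q = np.random.uniform(size=(5, 2))       # 5 query points

invdisttree = Invdisttree(X, z)
print invdisttree(q, k=6)                # interpolated values at each q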
From lists at hilboll.de  Wed Jun 30 06:24:03 2010
From: lists at hilboll.de (Andreas)
Date: Wed, 30 Jun 2010 12:24:03 +0200 (CEST)
Subject: [SciPy-User] optimization w/ constraints
In-Reply-To: 
References: <4C28E1EE.50100@hilboll.de>
Message-ID: <41e3cac7e841b950a0cfa0e240018eb2.squirrel@srv2.hilboll.net>

>> i need to minimize a function in 19 variables x_1, ... x_19, with
>> constraints
>>
>>     x_i * a = x_{i+8}     i=4,...,11
>>
>> so actually, it's a 12 variable problem, but some of the variables
>> get multiplied with each other ...
>
> I highly recommend simply recoding your function (or wrapping it in
> another function) that will expand from the 12 independent variables
> to the 19.

Yes, of course. I was thinking too complicated ... did it using
leastsq() on the 12 independent variables. Thanks for your suggestion,
which pointed me to my mistake!

cheers,
andreas.

From david.mrva at isamfunds.com  Wed Jun 30 08:02:50 2010
From: david.mrva at isamfunds.com (David Mrva)
Date: Wed, 30 Jun 2010 07:02:50 -0500
Subject: [SciPy-User] Error calling mov_max() on scikits.timeseries object
Message-ID: 

Hello All,

As a new user to scikits.timeseries, I started with a simple piece of
code: read a one-column timeseries from a CSV file and find the moving
maxima.

How should I correctly use the mov_max() function with a timeseries
object? When I call the mov_max() function, I keep getting an exception:

>>> import numpy as np
>>> import scikits.timeseries as ts
>>> import scikits.timeseries.lib.moving_funcs as mf
>>> b=ts.tsfromtxt("test4.csv", delimiter=',', names='price', datecols=(0), dtype='float')
>>> b
timeseries([(5277.0,) (5214.0,) (5180.0,) (5092.5,)],
   dtype = [('price', '<f8')],
   dates = [737791 738156 738521 738886],
   freq  = U)

>>> c=mf.mov_max(b, 2)
Traceback (most recent call last):
  File "C:\Python26\lib\site-packages\scikits\timeseries\lib\moving_funcs.py", line 228, in mov_max
    return _moving_func(data, MA_mov_max, kwargs)
  File "C:\Python26\lib\site-packages\scikits\timeseries\lib\moving_funcs.py", line 121, in _moving_func
    data = ma.fix_invalid(data)
  File "C:\Python26\lib\site-packages\numpy\ma\core.py", line 516, in fix_invalid
    invalid = np.logical_not(np.isfinite(a._data))
AttributeError: logical_not
>>>

Where the contents of the test4.csv file is:

24/06/2010 09:10,5092.5
23/06/2010 09:10,5180
22/06/2010 09:10,5214
21/06/2010 09:10,5277

Calling mov_max() on a list of numbers works fine.

Many thanks for any tips,
David

From pgmdevlist at gmail.com  Wed Jun 30 13:28:49 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 30 Jun 2010 13:28:49 -0400
Subject: [SciPy-User] Error calling mov_max() on scikits.timeseries object
In-Reply-To: 
References: 
Message-ID: 

On Jun 30, 2010, at 8:02 AM, David Mrva wrote:
> [...]

The moving functions don't require that the input is a time_series (a
standard ndarray or MaskedArray works fine), but you can't use a series
w/ a structured dtype (that is, w/ named fields, like the one you have).
Instead, you should use:

>>> c=mf.mov_max(b['price'], 2)

I'm a tad surprised by the exception you're getting. Which version of
timeseries/numpy are you using? Mine gives a

    NotImplementedError: Not implemented for this type

which is far more explanatory.

From lists at hilboll.de  Wed Jun 30 14:55:47 2010
From: lists at hilboll.de (Andreas)
Date: Wed, 30 Jun 2010 19:55:47 +0100
Subject: [SciPy-User] Using leastsq(), fmin(), anneal() to do a least squares fit
Message-ID: <4C2B9333.50500@hilboll.de>

Hi there,

I have an optimization problem in 12 variables.

I first wrote a function toBeMinimized(), which outputs these 12
variables as one array. Trying to solve this problem with leastsq(), I
noticed that however I play around with the parameters, the function
does not seem to find the global optimum.

So I figured I'd try some other functions from scipy.optimize, starting
with anneal(). I wrote a wrapper function around my original
toBeMinimized(), doing nothing but call
np.sum(toBeMinimized(params)**2). Now, however, the results I get from
anneal vary widely, and don't seem to have anything in common with the
results from leastsq().

Basically the same happens when I use fmin() instead of anneal().

I'm somewhat at a loss here. leastsq() seems to give the most consistent
results, but still they vary too much to be too useful for me.

Any ideas?

Thanks for your insight,

Andreas.
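For reference, the two call patterns Andreas describes -- leastsq on the
raw residual vector versus a scalar sum-of-squares wrapper for the
scalar minimizers -- in sketch form; toBeMinimized and p0 stand in for
his function and starting guess, which are not shown in the thread:

import numpy as np
from scipy.optimize import leastsq, fmin

def sumsq(params):
    # scalar wrapper for fmin()/anneal()
    return np.sum(toBeMinimized(params)**2)

p_lsq, ier = leastsq(toBeMinimized, p0)  # works on the residual vector
p_fmin = fmin(sumsq, p0)                 # works on the scalar wrapper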
From matthieu.brucher at gmail.com  Wed Jun 30 15:11:08 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 30 Jun 2010 21:11:08 +0200
Subject: [SciPy-User] Using leastsq(), fmin(), anneal() to do a least squares fit
In-Reply-To: <4C2B9333.50500@hilboll.de>
References: <4C2B9333.50500@hilboll.de>
Message-ID: 

Perhaps your minimum is numerically unstable, or around the global
minimum the cost function is more or less constant? Due to the 12
variables, you may also have several local minima where you may be
trapped. If you want more advanced optimization tools, you may try
OpenOpt or scikits.optimization.

Matthieu

2010/6/30 Andreas:
> [...]

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From jturner at gemini.edu  Wed Jun 30 17:48:23 2010
From: jturner at gemini.edu (James Turner)
Date: Wed, 30 Jun 2010 17:48:23 -0400
Subject: [SciPy-User] Co-ordinating Python astronomy libraries?
Message-ID: <4C2BBBA7.5060006@gemini.edu>

Dear Python users in astronomy,

At SciPy 2009, I arranged an astronomy BoF where we discussed the
fact that there are now a number of astronomy libraries for Python
floating around and maybe it would be good to collect more code into
a single place. People seemed receptive to this idea and weren't sure
why it hasn't already happened, given that there has been an Astrolib
page at SciPy for some years now, with an associated SVN repository:

    http://scipy.org/AstroLib

After the meeting last August, I was supposed to contact the mailing
list and some library authors I had talked to previously, to discuss
this further. My apologies for taking 10 months to do that! I did
draft an email the day after the BoF, but then we ran into a hurdle
with setting up new committers to the AstroLib repository (which has
taken a lot longer than expected to resolve), so it seemed a bad
time to suggest that new people start using it.

To discuss these issues further, we'd like to encourage everyone to
sign up for the AstroPy mailing list if you are not already on it.
The traffic is just a few messages per month.

    http://lists.astropy.scipy.org/mailman/listinfo/astropy

We (the 2009 BoF group) would also like to hear on the list about
why people have decided to host their own astronomy library (eg. not
being aware of the one at SciPy). Are you interested in contributing
to Astrolib? Do you have any other comments or concerns about
co-ordinating tools? Our motivation is to make libraries easy to
find and install, allow sharing code easily, help rationalize
available functionality and fill in what's missing. A standard
astronomy library with a single set of documentation should be more
coherent and easier to maintain. The idea is not to limit authors'
flexibility or take ownership of their code -- the sub-packages
can still be maintained by different people.

If you're at SciPy this week, Perry Greenfield and I would be happy
to talk to you. If you would like to add your existing library to
Astrolib, please contact Perry Greenfield or Mark Sienkiewicz at
STScI for access (contact details at http://scipy.org/AstroLib).
Note that the repository is being moved to a new server this week,
after which the URLs will be updated at scipy.org.

Thanks!

James Turner (Gemini).

Bcc: various library authors

From erin.sheldon at gmail.com  Wed Jun 30 19:37:15 2010
From: erin.sheldon at gmail.com (Erin Sheldon)
Date: Wed, 30 Jun 2010 19:37:15 -0400
Subject: [SciPy-User] [AstroPy] Co-ordinating Python astronomy libraries?
In-Reply-To: <4C2BBBA7.5060006@gemini.edu>
References: <4C2BBBA7.5060006@gemini.edu>
Message-ID: 

Dear James and AstroPy -

Thanks for your note, and prompting! My colleagues and I have been
writing data analysis related tools for some time now. We are all
astronomers, and in addition to general analysis tools we have a
growing library of astro utilities. I'd like to make others aware of
these, both because they may be useful and because more eyes will find
more bugs. We would also welcome collaborators. So far we have hosted
our own site because the tools are often so general rather than
astronomy specific, but if there is interest we could migrate or mirror
some of these to the astrolib svn archive.
As the tools mature we have been putting them at this google sites
repository:

    http://code.google.com/p/esutil/

Example astronomy codes currently are WCS utilities (wcsutil),
cosmology calculations (cosmology), coordinate transformations
(coords), and hierarchical triangular mesh sky search tools (htm). Of
more general interest may be the numpy_util, stat, random, ostools, io,
and integrate sub-packages. In addition to new things, there are a lot
of routines derived from IDL, the Goddard IDL astronomy libraries and
the SDSSIDL and IDLUTILS packages. In particular, the structure
routines in those IDL packages have correspondence to recarray routines
in our packages.

For those writing C/C++ extensions, the include/NumpyVector.h template
class is designed to simplify working with 1-d numpy arrays. There is
also a NumpyVoidVector for arrays whose type is determined at runtime.
The recfile package (http://code.google.com/p/recfile/) is incorporated
into esutil and is used for efficient io of rec files (recfile and
sfile sub-packages).

I would say that a primary focus of the astro tools is on using
numerical python arrays, especially recarrays. For example, the
coordinate transformation and WCS codes take arrays and return arrays:

    l,b = coords.eq2gal(ra,dec)

    wcs = wcsutil.WCS(fits_header)
    ra,dec = wcs.image2sky(x,y)

where everything here can be an array. This is opposed to the other
libraries out there that work with coordinates as objects. There are
clearly tradeoffs; we generally read data from FITS tables or databases
as numerical python arrays and so it is more natural to work with data
that way. I would say this is complementary to the other approach.

In addition to the above sub-packages, a few more are very close to
ready:

* pgnumpy: a numerical python interface to postgres
* numpydb: a numerical python interface to berkeley db
* columns: a simple, efficient column-oriented, pythonic database with
  indexing provided by numpydb
* sdsspy: tools for working with SDSS data
* mangle: tools for working with mangle masks

I hope people find these useful,

Erin Scott Sheldon
Cosmology Group
Brookhaven National Laboratory

on behalf of Brian Gerke and Amy Kimball

On Wed, Jun 30, 2010 at 5:48 PM, James Turner wrote:
> [...]
From apalomba at austin.rr.com  Wed Jun 30 22:17:38 2010
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Wed, 30 Jun 2010 21:17:38 -0500
Subject: [SciPy-User] looking for a python computational library...
Message-ID: 

Hey scipy-ers,

I was wondering if there is some python module out there that does
computational geometry that I could use in conjunction with scipy.

Thanks,
Anthony

From Chris.Barker at noaa.gov  Wed Jun 30 22:27:26 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Wed, 30 Jun 2010 19:27:26 -0700
Subject: [SciPy-User] looking for a python computational library...
In-Reply-To: 
References: 
Message-ID: <4C2BFD0E.4090606@noaa.gov>

Anthony Palomba wrote:
> I was wondering if there is some python module out there
> that does computational geometry that I could use in
> conjunction with scipy.

What specific routines do you need?

I don't know of a general purpose one, but there is Shapely, which is a
wrapper for the geos lib, used mainly for GIS.

There are a handful of Delaunay triangulation codes, too.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA 98115        (206) 526-6317  main reception

Chris.Barker at noaa.gov

From william.ratcliff at gmail.com  Wed Jun 30 23:16:00 2010
From: william.ratcliff at gmail.com (william ratcliff)
Date: Wed, 30 Jun 2010 23:16:00 -0400
Subject: [SciPy-User] looking for a python computational library...
In-Reply-To: <4C2BFD0E.4090606@noaa.gov>
References: <4C2BFD0E.4090606@noaa.gov>
Message-ID: 

You may want:
http://www.cgal.org/

It has python bindings.

Cheers,
William
(mind the license)

On 6/30/10, Christopher Barker wrote:
> Anthony Palomba wrote:
>> I was wondering if there is some python module out there
>> that does computational geometry that I could use in
>> conjunction with scipy.
>
> What specific routines do you need?
>
> I don't know of a general purpose one, but there is Shapely, which is
> a wrapper for the geos lib, used mainly for GIS.
> There are a handful of Delaunay triangulation codes, too.
>
> -Chris

From apalomba at austin.rr.com  Wed Jun 30 23:26:19 2010
From: apalomba at austin.rr.com (Anthony Palomba)
Date: Wed, 30 Jun 2010 22:26:19 -0500
Subject: [SciPy-User] looking for a python computational library...
In-Reply-To: 
References: <4C2BFD0E.4090606@noaa.gov>
Message-ID: 

Actually I tried CGAL python, it does not run, I get all sorts of
errors. And the CGAL python email list is devoid of any responses. I
guess I could try it again, maybe they have fixed things.

-ap

On Wed, Jun 30, 2010 at 10:16 PM, william ratcliff
<william.ratcliff at gmail.com> wrote:
> [...]

From tashbean at googlemail.com  Thu Jun 24 09:08:49 2010
From: tashbean at googlemail.com (tashbean)
Date: Thu, 24 Jun 2010 13:08:49 -0000
Subject: [SciPy-User] more efficient way of dealing with numpy arrays?
Message-ID: <7e882c44-6040-406c-b811-19f89be81d33@y11g2000yqm.googlegroups.com>

Hi,

I would like to pick certain rows of an array based on matching the
first column with options contained in another array, e.g. I have this
array:

parameter_list = array([['Q10', 'scipy.stats.uniform(2,10-2)'],
                        ['mpe', 'scipy.stats.uniform(0.,1.)'],
                        ['rdr_a', 'scipy.stats.uniform(5e-5,1.24-5e-5)'],
                        ['rdr_b', 'scipy.stats.uniform(-60.18,-3.41--60.18)']],
                       dtype='|S40')

I have an array which contains the strings of the first column which I
would like to pick out, e.g.

param_options = ['Q10', 'mpe']

My solution of how to do this is as follows:

new_params = numpy.array([])
for i in xrange(len(param_options)):
    new_params = numpy.append(new_params,
                              parameter_list[parameter_list[:,0]==param_options[i]])

Is there a more efficient way of doing this?

Thank you for your help!
Tash
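One vectorized alternative, as a sketch (not from the original post):
numpy.in1d, available in numpy 1.4 and later, builds the whole row mask
at once. Note it keeps the matching rows as a 2-d array, rather than the
flattened result numpy.append produces:

import numpy as np

mask = np.in1d(parameter_list[:, 0], param_options)
new_params = parameter_list[mask]   # rows whose first column matches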
From lindsay at stsci.edu  Fri Jun 25 06:35:01 2010
From: lindsay at stsci.edu (Kevin)
Date: Fri, 25 Jun 2010 10:35:01 -0000
Subject: [SciPy-User] checklist script error
Message-ID: <20100625063458.ABO14926@comet.stsci.edu>

The entire output of that script upon attempting to run it is the following:

Running tests:
__main__.test_imports('setuptools', None) ... ERROR
__main__.test_imports('IPython', None) ... MOD: IPython, version: 0.9.1 ok
__main__.test_imports('numpy', None) ... MOD: numpy, version: 1.3.0 ok
__main__.test_imports('scipy', None) ... MOD: scipy, version: 0.7.1 ok
__main__.test_imports('scipy.io', None) ... MOD: scipy.io, version: *no info* ok
__main__.test_imports('matplotlib', ) ... MOD: matplotlib, version: 0.99.0 ok
__main__.test_imports('pylab', None) ... MOD: pylab, version: *no info* ok
__main__.test_imports('enthought.mayavi.api', None) ... ERROR
__main__.test_loadtxt(array([[ 0.,  1.], ... ok
__main__.test_loadtxt(array([('M', 21, 72.0), ('F', 35, 58.0)], ... ok
__main__.test_loadtxt(array([ 1.,  3.]), array([ 1.,  3.])) ... ok
__main__.test_loadtxt(array([ 2.,  4.]), array([ 2.,  4.])) ... ok
Simple plot generation. ... ok
Plots with math ... ok

======================================================================
ERROR: __main__.test_imports('setuptools', None)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/stsci/pyssg/2.5.4/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "intro_tut_checklist.py", line 95, in check_import
    exec "import %s as m" % mnames
  File "", line 1, in
ImportError: No module named setuptools

======================================================================
ERROR: __main__.test_imports('enthought.mayavi.api', None)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/stsci/pyssg/2.5.4/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "intro_tut_checklist.py", line 95, in check_import
    exec "import %s as m" % mnames
  File "", line 1, in
ImportError: No module named enthought.mayavi.api

----------------------------------------------------------------------
Ran 14 tests in 10.766s

FAILED (errors=2)

Cleanup - removing temp directory: /Users/lindsay/tmp-testdata-etwtf9

***************************************************************************
TESTS FINISHED
***************************************************************************

If the printout above did not finish in 'OK' but instead says 'FAILED',
copy and send the *entire* output, including the system information
below, for help. We'll do our best to assist you. You can send your
message to the Scipy user mailing list:

    http://mail.scipy.org/mailman/listinfo/scipy-user

but feel free to also CC directly: cburns at berkeley dot edu

==================
System information
==================
os.name      : posix
os.uname     : ('Darwin', 'mooseman.home', '9.8.0', 'Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386', 'i386')
platform     : darwin
platform+    : Darwin-9.8.0-i386-32bit
prefix       : /usr/stsci/pyssg/Python-2.5.4
exec_prefix  : /usr/stsci/pyssg/Python-2.5.4
executable   : /usr/stsci/pyssg/Python-2.5.4//bin/python
version_info : (2, 5, 4, 'final', 0)
version      : 2.5.4 (r254:67916, Nov  6 2009, 11:35:14)
               [GCC 4.0.1 (Apple Inc. build 5465)]
==================