From gustavo.goretkin at gmail.com Sat Oct 1 04:34:11 2011
From: gustavo.goretkin at gmail.com (Gustavo Goretkin)
Date: Sat, 1 Oct 2011 04:34:11 -0400
Subject: [SciPy-User] Masked array along one axis
Message-ID:

I want to use a 2D array to store a set of points. There are n points in d dimensions, so the array is n-by-d. I want to use the masked array features of NumPy, but it doesn't make sense to mask only some components of a point. Either a point is entirely masked or it isn't. In other words, the mask is n-by-1 (or equivalent up to singleton dimensions). Does the masked array support masking along only some axes?

Thanks,
Gustavo
-------------- next part --------------
An HTML attachment was scrubbed...

From klonuo at gmail.com Sat Oct 1 05:55:51 2011
From: klonuo at gmail.com (Klonuo Umom)
Date: Sat, 1 Oct 2011 11:55:51 +0200
Subject: [SciPy-User] In which numpy modules should MKL be better than ATLAS
Message-ID:

Because of OS reinstalling, I had a chance to test this sample on different setups on the same PC:

import numpy as np
A = np.ones((1000, 1000))
B = np.ones((1000, 1000))
%timeit np.dot(A, B)

1x = ATLAS on Linux (reference speed)
2x = MKL with GNU compilers on Linux
2x = MKL with Intel compilers on Windows 7
30x = bare numpy

I didn't plan to do this, so I didn't test additional calculations, and I was using the latest versions to date of all products. On the Internet I usually find that MKL should outperform ATLAS. I'm curious what linalg module testing would give, but as said, I didn't test it. So in which modules should a user expect an impact of MKL over ATLAS? In matrix dot product, obviously not.
-------------- next part --------------
An HTML attachment was scrubbed...
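The `%timeit` in the snippet above is IPython-specific magic; as a plain-script sketch of the same measurement, using only the standard library's `timeit`, one could write:

```python
import timeit

import numpy as np

# Same benchmark as in the post above, runnable outside IPython.
# np.dot on large float64 matrices dispatches to whatever BLAS
# (ATLAS, MKL, or the unoptimized reference) NumPy was built against.
A = np.ones((1000, 1000))
B = np.ones((1000, 1000))

best = min(timeit.repeat(lambda: np.dot(A, B), number=1, repeat=3))
print("np.dot, 1000x1000: %.4f s (best of 3)" % best)
```

Running the identical script on each setup gives directly comparable numbers, which is how the 1x/2x/30x ratios above were obtained.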
URL:

From d.s.seljebotn at astro.uio.no Sat Oct 1 06:23:37 2011
From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn)
Date: Sat, 01 Oct 2011 12:23:37 +0200
Subject: [SciPy-User] In which numpy modules should MKL be better than ATLAS
In-Reply-To:
References:
Message-ID: <4E86EA29.6060007@astro.uio.no>

On 10/01/2011 11:55 AM, Klonuo Umom wrote:
> I had a chance to test this sample on different setups on same PC:
>
> import numpy as np
> A=np.ones((1000,1000))
> B=np.ones((1000,1000))
> %timeit np.dot(A, B)
>
> because of OS reinstalling.
>
> 1x = ATLAS on Linux (reference speed)
> 2x = MKL with GNU compilers on Linux
> 2x = MKL with Intel compilers on Windows 7
> 30x = bare numpy
>
> I didn't plan to do this so I didn't test additional calculations, and I
> was using latest version to date for all products.
>
> On Internet I usually find that MKL should outperform ATLAS. I'm curious
> what would linalg module testing give, but as said I didn't test it. So
> in which modules should user expect impact of MKL over ATLAS? In matrix
> dot product obviously not.

What CPU are you on? MKL is tuned for Intel CPUs; perhaps ATLAS outperforms it on AMD ones.

Dag Sverre

From klonuo at gmail.com Sat Oct 1 06:35:24 2011
From: klonuo at gmail.com (Klonuo Umom)
Date: Sat, 1 Oct 2011 12:35:24 +0200
Subject: [SciPy-User] In which numpy modules should MKL be better than ATLAS
In-Reply-To: <4E86EA29.6060007@astro.uio.no>
References: <4E86EA29.6060007@astro.uio.no>
Message-ID:

It's an old Intel P4 3 GHz. ATLAS/LAPACK are built from source, so maybe more optimized.

On Sat, Oct 1, 2011 at 12:23 PM, Dag Sverre Seljebotn < d.s.seljebotn at astro.uio.no> wrote:
> On 10/01/2011 11:55 AM, Klonuo Umom wrote:
> > I had a chance to test this sample on different setups on same PC:
> >
> > import numpy as np
> > A=np.ones((1000,1000))
> > B=np.ones((1000,1000))
> > %timeit np.dot(A, B)
> >
> > because of OS reinstalling.
> > > > 1x = ATLAS on Linux (reference speed) > > 2x = MKL with GNU compilers on Linux > > 2x = MKL with Intel compilers on Windows 7 > > 30x = bare numpy > > > > I didn't plan to do this so I didn't test additional calculations, and I > > was using latest version to date for all products. > > > > On Internet I usually find that MKL should outperform ATLAS. I'm curious > > what would linalg module testing give, but as said I didn't test it. So > > in which modules should user expect impact of MKL over ATLAS? In matrix > > dot product obviously not. > > What CPU are you on? MKL is tuned for Intel CPUs, perhaps ATLAS > outperforms it on AMD ones. > > Dag Sverre > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hturesson at gmail.com Sat Oct 1 06:46:53 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Sat, 1 Oct 2011 06:46:53 -0400 Subject: [SciPy-User] Cubic splines - MATLAB vs Scipy.interpolate In-Reply-To: References: Message-ID: Hi, splmake and spleval (in scipy.interpolate) appear to run as fast as spline in matlab. They are approximately 30 times faster than cspline1d and cspline1d_eval. splmake with order = 3 gives the same output as cspline1d. The documentation for splmake and spleval is unfortunately minimal. Best, Hjalmar On Tue, Sep 27, 2011 at 11:08 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Sep 27, 2011 at 5:50 PM, wrote: > >> On Tue, Sep 27, 2011 at 7:27 PM, Charles R Harris >> wrote: >> > >> > >> > On Tue, Sep 27, 2011 at 2:27 PM, Zachary Pincus < >> zachary.pincus at yale.edu> >> > wrote: >> >> >> >> scipy.signal has some cubic and quadratic spline functions: >> >> cspline1d >> >> cspline1d_eval >> >> cspline2d >> >> >> >> (and replace the c with q for the quadratic versions). 
>> >> >> >> I have no idea how fast they are, or if they're at all drop-in >> >> replacements for the matlab ones. The stuff in scipy.interpolate is >> >> powerful, but the fitpack spline-fitting operations can be a bit >> >> input-sensitive and prone to strange ringing. >> >> >> >> Zach >> >> >> >> >> >> >> >> On Sep 27, 2011, at 3:32 PM, Charles R Harris wrote: >> >> >> >> > >> >> > >> >> > On Tue, Sep 27, 2011 at 1:04 PM, Jaidev Deshpande >> >> > wrote: >> >> > Hi >> >> > >> >> > The big question: Why does the MATLAB function spline operate faster >> >> > than the cubic spline alternatives in Scipy, especially splrep and >> splev ? >> >> > >> >> > ------ >> >> > >> >> > The context: I'm working on an algorithm that bottlenecks on spline >> >> > interpolation. >> >> > >> >> > Some functions in Scipy return an interpolation object function >> >> > depending on the input data which needs to be evaluated independently >> over >> >> > the whole range. >> >> > >> >> > So I used 'lower order' functions like splrep and splev. Even that >> was >> >> > too slow. >> >> > >> >> > Then I tried to write my own code for cubic splines, generating and >> >> > solving a system of 4N simultaneous equations for interpolation >> between N+1 >> >> > points. >> >> > >> >> > No matter what I do, the code is quite slow. How come the MATLAB >> >> > function spline operate so fast? What am I missing? What can I do to >> speed >> >> > it up? >> >> > >> >> > >> >> > I suspect it is because the scipy routines you reference are based on >> >> > b-splines, which are needed for least squares fits. Simple cubic >> spline >> >> > interpolation through a give set of points tends to be faster and I >> believe >> >> > that is what the Matlab spline function does. To get b-splines in >> Matlab you >> >> > need one of the toolboxes, it doesn't come with the core. I don't >> think >> >> > scipy has a simple cubic spline interpolation, but I may be wrong. 
>> >>
>> > I believe the splines in signal are periodic and the boundary conditions
>> > aren't flexible. The documentation is, um..., well, they are effectively
>> > undocumented. We really need better spline support in scipy.
>>
>> I thought the main difference of the signal compared to the interpolate
>> splines is that they work only on a regular grid.
>> They have a smoothing coefficient lambda, so they don't seem to be
>> pure interpolating splines.
>>
> The equally spaced points are what I meant by periodic, i.e., the same
> basis function can be repeated. The signal itself is periodic if extended to
> twice its length with the mirror symmetry at the ends. I'm not sure how the
> smoothing factor works. The ndimage map_coordinates would work for
> interpolating equally spaced points and has more boundary conditions, but
> they still can't be arbitrary. I think scipy could use a simple
> interpolating spline with not-a-knot default boundary conditions and no
> repeated knot points. The not-a-knot boundary condition means to use finite
> differences at the ends to estimate the slopes, which then become the
> boundary conditions.
>
> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hturesson at gmail.com Sat Oct 1 06:53:22 2011
From: hturesson at gmail.com (Hjalmar Turesson)
Date: Sat, 1 Oct 2011 06:53:22 -0400
Subject: [SciPy-User] fast spline interpolation of multiple equal length waveforms
In-Reply-To:
References: <4E7C8FA8.6020707@vcn.com>
Message-ID:

Hi Jonathan,

I finally got time to test the polynomial interpolation. Unfortunately, my x_interpret had points slightly outside the interval of the original x. polyterp didn't like extrapolation and became quite slow. On the other hand, splmake and spleval work fine and are very quick.
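As a point of reference for the fitpack-based route discussed in this thread, a minimal sketch of cubic interpolation with `splrep`/`splev` (with `s=0` requesting pure interpolation rather than smoothing; grid and test function are illustrative, not from the thread):

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Build a cubic B-spline representation through the data (s=0: no smoothing).
x = np.linspace(0, 10, 50)
y = np.sin(x)
tck = splrep(x, y, k=3, s=0)

# Evaluate it on a finer grid; ynew closely tracks sin(xnew).
xnew = np.linspace(0, 10, 500)
ynew = splev(xnew, tck)
```

This is the B-spline machinery the thread contrasts with MATLAB's `spline`: `splrep` pays a setup cost for the B-spline representation, which a dedicated piecewise cubic interpolant avoids.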
Thus, my problem is solved.

Thanks,
Hjalmar

On Fri, Sep 23, 2011 at 10:21 AM, Hjalmar Turesson wrote:
> Thanks for the reply.
>
> Both the x and y values are different, but they have the same length.
> I'll try your simple piecewise polynomial interpolation over the weekend,
> and report back when I know how well it works.
>
> Thanks,
> Hjalmar
>
> On Fri, Sep 23, 2011 at 9:54 AM, Jonathan Stickel wrote:
>
>> On 9/22/11 20:36, scipy-user-request at scipy.org wrote:
>>
>>> Date: Thu, 22 Sep 2011 21:59:59 -0400
>>> From: Hjalmar Turesson
>>> Subject: [SciPy-User] fast spline interpolation of multiple equal
>>> length waveforms
>>> To: scipy-user at scipy.org
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> Hi,
>>> I have a data set with hundreds of thousands of 40-point-long waveforms.
>>> I want to use cubic splines to interpolate these at intermediate time
>>> points. However, the points are different for all waveforms; only the
>>> number of points is the same. In other words, I want to interpolate a
>>> large number of equally short waveforms, each to its own grid of
>>> x-values/time points, and I want to do this as FAST as possible.
>>>
>>> Are there any functions that can take a whole array of waveforms and a
>>> size-matched array of new x-values, and interpolate each waveform at a
>>> matched row (or column) of x-values?
>>>
>>> What I've found, thus far, appears to require a loop to go one by one
>>> through the waveforms and corresponding grids of x-values. I fear that a
>>> long loop will be significantly slower than a direct evaluation of the
>>> entire array.
>>>
>>> Thanks,
>>> Hjalmar
>>
>> For each data set (x,y), are the x-values the same and the y-values
>> different?
If so, you may find this code useful: >> >> http://scipy-central.org/item/**21/1/simple-piecewise-** >> polynomial-interpolation >> >> It is not splines, but nonetheless provides good quality interpolation and >> is very fast. For given x and x_interp, it can create an interpolation >> matrix P. Then y_interp = P*y. If you have all your y-data in Y, then >> Y_interp = P*Y. >> >> HTH, >> Jonathan >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Oct 1 10:52:18 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 1 Oct 2011 10:52:18 -0400 Subject: [SciPy-User] inverse function of a spline In-Reply-To: References: Message-ID: On Fri, Sep 30, 2011 at 12:37 PM, wrote: > On Thu, Sep 29, 2011 at 12:37 PM, Jeff Brown wrote: > > gmail.com> writes: > > > >> > >> On Fri, May 7, 2010 at 4:37 PM, nicky van foreest > gmail.com> > > wrote: > >> > Hi Josef, > >> > > >> >> If I have a cubic spline, or any other smooth interpolator in scipy, > >> >> is there a way to get the > >> >> inverse function directly? > >> > > >> > How can you ensure that the cubic spline approx is non-decreasing? I > >> > actually wonder whether using cubic splines is the best way to > >> > approximate distribution functions. > >> > >> Now I know it's not, but I was designing the extension to the linear > case > >> on paper instead of in the interpreter, and got stuck on the wrong > >> problem. > >> > > > > There's an algorithm for making constrained-to-be-monotonic spline > interpolants > > (only in one dimension, though). The reference is Dougherty et al 1989 > > Mathematics of Computation, vol 52 no 186 pp 471-494 (April 1989). This > is > > available on-line at www.jstor.org. > > Thanks for the reference. Maybe Ann's interpolators in scipy that take > derivatives could be used for this. 
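Jonathan's interpolation-matrix idea above (`Y_interp = P*Y`, for the case where all waveforms share one x-grid) can be sketched for piecewise-linear interpolation; `interp_matrix` below is a hypothetical helper written for illustration, not the code behind the scipy-central link:

```python
import numpy as np

def interp_matrix(x, x_new):
    # Build P so that P @ y linearly interpolates y (sampled at x) at x_new.
    # Each row of P holds the two barycentric weights of the bracketing nodes.
    idx = np.clip(np.searchsorted(x, x_new) - 1, 0, len(x) - 2)
    w = (x_new - x[idx]) / (x[idx + 1] - x[idx])
    P = np.zeros((len(x_new), len(x)))
    rows = np.arange(len(x_new))
    P[rows, idx] = 1.0 - w
    P[rows, idx + 1] = w
    return P

x = np.linspace(0.0, 1.0, 5)          # common grid, 5 points
Y = np.column_stack([x, 2.0 * x])     # two "waveforms" as columns
P = interp_matrix(x, np.array([0.1, 0.6]))
print(P @ Y)                          # interpolates every column at once
```

The payoff is exactly the one described above: P is built once, and interpolating all waveforms is a single matrix product instead of a Python loop.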
> trying out how PiecewisePolynomial works, almost but not quite enough Josef spamming the world with messy examples > > Shape preserving splines or piecewise polynomials would make a nice > addition to scipy, but I'm only a potential user. > > I have dropped this for the moment, after taking a detour with > (global) orthonormal polynomial approximation, where I also haven't > solved the integration and function inversion problem yet (nice pdf > but only brute force cdf and ppf). > > Josef > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: try_interpolate_monotonic.py Type: text/x-python Size: 1271 bytes Desc: not available URL: From charlesr.harris at gmail.com Sat Oct 1 11:17:33 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 1 Oct 2011 09:17:33 -0600 Subject: [SciPy-User] inverse function of a spline In-Reply-To: References: Message-ID: On Sat, Oct 1, 2011 at 8:52 AM, wrote: > > > On Fri, Sep 30, 2011 at 12:37 PM, wrote: > >> On Thu, Sep 29, 2011 at 12:37 PM, Jeff Brown wrote: >> > gmail.com> writes: >> > >> >> >> >> On Fri, May 7, 2010 at 4:37 PM, nicky van foreest >> gmail.com> >> > wrote: >> >> > Hi Josef, >> >> > >> >> >> If I have a cubic spline, or any other smooth interpolator in scipy, >> >> >> is there a way to get the >> >> >> inverse function directly? >> >> > >> >> > How can you ensure that the cubic spline approx is non-decreasing? I >> >> > actually wonder whether using cubic splines is the best way to >> >> > approximate distribution functions. >> >> >> >> Now I know it's not, but I was designing the extension to the linear >> case >> >> on paper instead of in the interpreter, and got stuck on the wrong >> >> problem. 
>> >> >> > >> > There's an algorithm for making constrained-to-be-monotonic spline >> interpolants >> > (only in one dimension, though). The reference is Dougherty et al 1989 >> > Mathematics of Computation, vol 52 no 186 pp 471-494 (April 1989). This >> is >> > available on-line at www.jstor.org. >> >> Thanks for the reference. Maybe Ann's interpolators in scipy that take >> derivatives could be used for this. >> > > trying out how PiecewisePolynomial works, almost but not quite enough > > IIRC, de Boor dealt with fitting distribution functions somewhere in his book "A Practical Guide to Splines". I don't recall whether or not he constrains things to positivity, but recalling one of the figures, I think that he was fitting histograms, perhaps their area. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Oct 1 12:05:08 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 1 Oct 2011 12:05:08 -0400 Subject: [SciPy-User] inverse function of a spline In-Reply-To: References: Message-ID: On Sat, Oct 1, 2011 at 11:17 AM, Charles R Harris wrote: > > > On Sat, Oct 1, 2011 at 8:52 AM, wrote: > >> >> >> On Fri, Sep 30, 2011 at 12:37 PM, wrote: >> >>> On Thu, Sep 29, 2011 at 12:37 PM, Jeff Brown >>> wrote: >>> > gmail.com> writes: >>> > >>> >> >>> >> On Fri, May 7, 2010 at 4:37 PM, nicky van foreest >>> gmail.com> >>> > wrote: >>> >> > Hi Josef, >>> >> > >>> >> >> If I have a cubic spline, or any other smooth interpolator in >>> scipy, >>> >> >> is there a way to get the >>> >> >> inverse function directly? >>> >> > >>> >> > How can you ensure that the cubic spline approx is non-decreasing? I >>> >> > actually wonder whether using cubic splines is the best way to >>> >> > approximate distribution functions. >>> >> >>> >> Now I know it's not, but I was designing the extension to the linear >>> case >>> >> on paper instead of in the interpreter, and got stuck on the wrong >>> >> problem. 
>>> >>
>>> > There's an algorithm for making constrained-to-be-monotonic spline
>>> > interpolants (only in one dimension, though). The reference is Dougherty
>>> > et al. 1989, Mathematics of Computation, vol. 52, no. 186, pp. 471-494
>>> > (April 1989). This is available online at www.jstor.org.
>>>
>>> Thanks for the reference. Maybe Ann's interpolators in scipy that take
>>> derivatives could be used for this.
>>
>> trying out how PiecewisePolynomial works, almost but not quite enough
>>
> IIRC, de Boor dealt with fitting distribution functions somewhere in his
> book "A Practical Guide to Splines". I don't recall whether or not he
> constrains things to positivity, but recalling one of the figures, I think
> that he was fitting histograms, perhaps their area.

looks nice, he matches the area in each bin. page 79 ff, I don't see any explicit non-negativity constraints.

as for an efficient implementation in numpython: ?

Josef

> Chuck
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Sat Oct 1 23:36:52 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 1 Oct 2011 23:36:52 -0400
Subject: [SciPy-User] inverse function of a spline
In-Reply-To:
References:
Message-ID:

An example of a possible use case.

Background: I wrote a function for the Lilliefors test of normality (a Kolmogorov-Smirnov test for the normal distribution based on estimated mean and variance). (The reported power of the test is not great.) The p-values are non-standard and tabulated, by sample size and by probability (only 0.001 to 0.2). To save work later on for other tests, I wrote a simple class, but I restrict it to linear interpolation.
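One shape-preserving option along the lines discussed in this thread, sketched with scipy's PCHIP interpolator (an illustration of the monotone-interpolant idea, not the Dougherty et al. algorithm): since a monotone interpolant never overshoots, an approximate inverse can be built by simply swapping the roles of x and F(x). The cdf-like data here is made up for the example.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone (shape-preserving) interpolation of cdf-like data, and an
# approximate inverse (ppf) obtained by exchanging abscissae and ordinates.
x = np.linspace(-3, 3, 15)
cdf = 0.5 * (1.0 + np.tanh(x))         # any strictly increasing data

F = PchipInterpolator(x, cdf)          # monotone: no spurious wiggles
Finv = PchipInterpolator(cdf, x)       # requires cdf strictly increasing

print(float(Finv(0.5)))                # 0.0: the "median" of this cdf
```

Monotonicity of F is what makes the swap legitimate: F is then invertible on its range, and Finv interpolates the exact inverse at the data points.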
Using two interp1d or using Rbf works fine, using interp2d gives me a warning - cannot add more knots - and the interpolated values are not correct. Since I never played much with interp2d, or interpolate.bisplrep or interpolate.BivariateSpline, human error is possible. Josef -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tabledist.py Type: text/x-python Size: 5107 bytes Desc: not available URL: From millman at berkeley.edu Sun Oct 2 14:02:10 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 2 Oct 2011 11:02:10 -0700 Subject: [SciPy-User] [ANN] SciPy India 2011 Call for Presentations Message-ID: =============================== SciPy 2011 India Call for Papers =============================== The third `SciPy India Conference `_ will be held from December 4th through the 7th at the `Indian Institute of Technology, Bombay (IITB) `_ in Mumbai, Maharashtra India. At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is followed by two days of tutorials and a code sprint, during which community experts provide training on several scientific Python packages. We invite you to take part by submitting a talk abstract on the conference website at: http://scipy.in Talk/Paper Submission --------------------- We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research. We welcome contributions from academia as well as industry. Keynote Speaker --------------- Eric Jones will deliver the keynote address this year. 
Eric has a broad background in engineering and software development and leads Enthought's product engineering and software design. Prior to co-founding Enthought, Eric worked with numerical electromagnetics and genetic optimization in the Department of Electrical Engineering at Duke University. He has taught numerous courses on the use of Python for scientific computing and serves as a member of the Python Software Foundation. He holds M.S. and Ph.D. degrees from Duke University in electrical engineering and a B.S.E. in mechanical engineering from Baylor University. Eric was the Keynote Speaker at SciPy US 2011. Important Dates --------------- October 26, 2011, Wednesday: Abstracts Due October 31, 2011, Monday: Schedule announced November 21, 2011, Monday: Proceedings paper submission due December 4-5, 2011, Sunday-Monday: Conference December 6-7 2011, Tuesday-Wednesday: Tutorials/Sprints Organizers ---------- * Jarrod Millman, Neuroscience Institute, UC Berkeley, USA (Conference Co-Chair) * Prabhu Ramachandran, Department of Aerospace Engineering, IIT Bombay, India (Conference Co-Chair) * FOSSEE Team From wesmckinn at gmail.com Mon Oct 3 01:12:28 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 3 Oct 2011 01:12:28 -0400 Subject: [SciPy-User] ANN: pandas 0.4.2 Message-ID: I'm pleased to announce the 0.4.2 pandas release, it includes a number of bugfixes from the recent 0.4.1 version but also includes a host of new speed optimizations primarily in the core data alignment and joining/merging routines. The most significant enhancement of which is the introduction of a specialized int64-based Index class which will help enable some of the fastest open source time series processing available, using the new NumPy datetime64 dtype. Please see the full release notes. Thanks to all for the feedback on recent releases and bug reports. 
best, Wes What is it ========== pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with ?relational? or ?labeled? data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. Links ===== Release Notes: https://github.com/wesm/pandas/blob/master/RELEASE.rst Documentation: http://pandas.sourceforge.net Installers: http://pypi.python.org/pypi/pandas Code Repository: http://github.com/wesm/pandas Mailing List: http://groups.google.com/group/pystatsmodels Blog: http://blog.wesmckinney.com pandas 0.4.2 Release Notes ========================== **Release date:** 10/3/2011 This is a performance optimization release with several bug fixes. The new Int64Index and new merging / joining Cython code and related Python infrastructure are the main new additions **New features / modules** - Added fast `Int64Index` type with specialized join, union, intersection. Will result in significant performance enhancements for int64-based time series (e.g. using NumPy's datetime64 one day) and also faster operations on DataFrame objects storing record array-like data. - Refactored `Index` classes to have a `join` method and associated data alignment routines throughout the codebase to be able to leverage optimized joining / merging routines. 
- Added `Series.align` method for aligning two series with choice of join method - Wrote faster Cython data alignment / merging routines resulting in substantial speed increases - Added `is_monotonic` property to `Index` classes with associated Cython code to evaluate the monotonicity of the `Index` values - Add method `get_level_values` to `MultiIndex` - Implemented shallow copy of `BlockManager` object in `DataFrame` internals **Improvements to existing features** - Improved performance of `isnull` and `notnull`, a regression from v0.3.0 (GH #187) - Wrote templating / code generation script to auto-generate Cython code for various functions which need to be available for the 4 major data types used in pandas (float64, bool, object, int64) - Refactored code related to `DataFrame.join` so that intermediate aligned copies of the data in each `DataFrame` argument do not need to be created. Substantial performance increases result (GH #176) - Substantially improved performance of generic `Index.intersection` and `Index.union` - Improved performance of `DateRange.union` with overlapping ranges and non-cacheable offsets (like Minute). Implemented analogous fast `DateRange.intersection` for overlapping ranges. - Implemented `BlockManager.take` resulting in significantly faster `take` performance on mixed-type `DataFrame` objects (GH #104) - Improved performance of `Series.sort_index` - Significant groupby performance enhancement: removed unnecessary integrity checks in DataFrame internals that were slowing down slicing operations to retrieve groups - Added informative Exception when passing dict to DataFrame groupby aggregation with axis != 0 **API Changes** None **Bug fixes** - Fixed minor unhandled exception in Cython code implementing fast groupby aggregation operations - Fixed bug in unstacking code manifesting with more than 3 hierarchical levels - Throw exception when step specified in label-based slice (GH #185) - Fix isnull to correctly work with np.float32. 
Fix upstream bug described in GH #182 - Finish implementation of as_index=False in groupby for DataFrame aggregation (GH #181) - Raise SkipTest for pre-epoch HDFStore failure. Real fix will be sorted out via datetime64 dtype Thanks ------ - Uri Laserson - Scott Sinclair From pgmdevlist at gmail.com Mon Oct 3 02:08:06 2011 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 3 Oct 2011 08:08:06 +0200 Subject: [SciPy-User] Masked array along one axis In-Reply-To: References: Message-ID: On Oct 01, 2011, at 10:34 , Gustavo Goretkin wrote: > I want to use a 2D array to store a set of points. There are n points in d dimensions, so the array is n-by-d. > > I want to use the masked array features of NumPy, but it doesn't make sense to mask only some some components of a point. Either one point is entirely masked or it isn't. In other words, the mask is n-by-1 (or equivalent up to singleton dimensions). > > Does the masked array support this masking only in some axes? No. However, instead of considering a nxd array of floats (i.e., a "standard" array), you may want to try a n array of d-fields (i.e., use a structured array). That way, your array would have only one axis and you could mask one row at a time. From gael.varoquaux at normalesup.org Mon Oct 3 03:50:06 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 3 Oct 2011 09:50:06 +0200 Subject: [SciPy-User] Fitting a model to an image to recover the position In-Reply-To: <42041109-D42D-4C0C-8911-65631307C91E@unc.edu> References: <42041109-D42D-4C0C-8911-65631307C91E@unc.edu> Message-ID: <20111003075006.GH11911@phare.normalesup.org> It seems to me that you have what is known as a registration problem. It has many local minima and is a nasty optimization problem. 
I am not an expert in registration, but I suggest that you try a multi-scale approach if your angles and translations are large: downsample by a large factor the two images (target and data), fit these, and use the rotation and translation parameters learned to initialize less downsampled fits. Iterate a few times for different values of downsampling. This helps avoiding the local minima. All the registration strategies are very dependant on the kind of data that you have. Registration is more of an art than a science. Ga?l On Fri, Sep 30, 2011 at 03:18:33PM +0000, Niemi, Sami wrote: > I am trying to solve a problem where I need to fit a model to an image to recover the position (of the model in that image). The real-life application is more complicated (fitting sparse field spectroscopic data to an SDSS r-band image, if you are interested in) than the simple example I give below, but it is analogous. The most significant difference being that in the real-life application I need to allow rotations (so I need to find a position x and y and rotation r that minimizes for example the chi**2 value) and that the difference between the model and image might be larger than the small random error applied in the simple example (and that there is less information in one of the directions because of the finite slit width). > The simple example I show below works for the real-life problem, but it's far from being effective or elegant. I would like to use some in-built minimization methods like scipy.optimize.leastsq to solve this problem but I haven't found a way to get it work on my problem. Any ideas how I could do this better? > Cheers, > Sami > Simple Example: > import numpy as np > def findPosition(image, model, xguess, yguess, length, width, xrange=20, yrange=20): > ''' > Finds the x and y position of a model in an image that minimizes chi**2 by looping > a grid around the initial guess given by xguess and yguess. 
> This method is far from optimal, so the question is how to do this > with the scipy.optimize.leastsq or some other built-in min. algorithm. > ''' > out = [] > for x in range(-xrange, xrange): > for y in range(-yrange, yrange): > obs = image[yguess+y:yguess+y+length, xguess+x:xguess+x+width].ravel() > chi2 = np.sum((obs - model)**2 / model) > out.append([chi2, x, y]) > out = np.array(out) > return out, np.argmin(out[:,0]) > #create an image of 100 times 100 pixels of random data as an example, this represents the imaging data > image = np.random.rand(10000).reshape(100,100) + 1000 > #take a slice and add a small error, this represents the model that had been recovered somewhere else > xpos, ypos = 80, 20 > length, width = 21, 4 > model = (image[ypos:ypos+length, xpos:xpos+width].copy() + np.random.rand(1)[0]*0.1).ravel() > #initial guess of the position (this is close, the correct one is 80, 20) > xguess, yguess = 75, 33 > #find the position using the idiotic algorithm > out, armin = findPosition(image, model, xguess, yguess, length, width) > #print out the recovered shift and position > print 'xshift yshift chi**2' > print out[armin,1], out[armin, 2], out[armin, 0] > print 'xfinal yfinal' > print xguess+out[armin,1], yguess+out[armin, 2], 'which should be %i, %i' % (xpos, ypos) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Gael Varoquaux Research Fellow, INSERM Associate researcher, INRIA Laboratoire de Neuro-Imagerie Assistee par Ordinateur NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France Phone: ++ 33-1-69-08-78-35 Mobile: ++ 33-6-28-25-64-62 http://gael-varoquaux.info From tony at maths.lth.se Mon Oct 3 05:11:07 2011 From: tony at maths.lth.se (Tony Stillfjord) Date: Mon, 3 Oct 2011 11:11:07 +0200 Subject: [SciPy-User] scipy.sparse vs. 
pysparse In-Reply-To: <4E8733D1.4080502@iki.fi> References: <4E8733D1.4080502@iki.fi> Message-ID: Hello, I apologize for not responding earlier. Thanks for taking the time to work on this, Pauli. I took the liberty of sending this to the mailing list since I believe that's where you meant to send your previous email, instead of directly to me. I managed to apply the patch (first time doing that :) ) and ran the same test script you supplied. I get similar speed-ups, but on my system pysparse seems to be faster than on your system, while scipy.sparse is slower. I'm inclined to believe that that has something to do with how I built scipy, though I can't think of any reason straight away. I was even a little less kind towards pysparse in my test, in that I used the high-level matrix class and A*b rather than the low-level A.matvec(b, x) - see below. Results of Pauli's test script on my Ubuntu system: After: % N scipy.sparse pysparse 32 9.585e-06 4.476e-07 64 1.021e-05 6.214e-07 128 1.054e-05 1.009e-06 256 1.158e-05 1.651e-06 512 1.328e-05 3.177e-06 1024 1.619e-05 6.085e-06 2048 2.251e-05 1.18e-05 4096 3.483e-05 2.314e-05 8192 5.997e-05 4.585e-05 16384 0.0001373 9.185e-05 32768 0.0002084 0.0001839 Before: % N scipy.sparse pysparse 32 5.375e-05 4.51e-07 64 5.47e-05 6.221e-07 128 5.563e-05 9.973e-07 256 5.579e-05 1.681e-06 512 5.723e-05 3.166e-06 1024 6.057e-05 6.074e-06 2048 6.736e-05 1.177e-05 4096 7.966e-05 3.063e-05 8192 0.0001047 4.587e-05 16384 0.0001538 9.193e-05 32768 0.0002546 0.0001832 I also tried my own benchmark that you can get here: http://dl.dropbox.com/u/2349184/pysparse_vs_scipy_dev.py The results (in micro-seconds): 1D: N SciPy pysparse 50 1.60e+01 5.99e+00 100 1.01e+01 6.31e+00 200 1.10e+01 7.39e+00 300 1.16e+01 8.25e+00 500 1.29e+01 1.02e+01 1000 1.71e+01 1.36e+01 2500 2.51e+01 2.47e+01 5000 4.03e+01 4.26e+01 10000 7.07e+01 8.15e+01 25000 1.62e+02 1.90e+02 50000 3.67e+02 4.90e+02 100000 1.14e+03 1.37e+03 2D: N=M^2 SciPy pysparse 100 1.05e+01 6.86e+00 
625 1.58e+01 1.42e+01
2500 3.21e+01 3.82e+01
10000 9.81e+01 1.36e+02
40000 5.15e+02 9.93e+02
90000 1.40e+03 2.56e+03
250000 3.99e+03 7.79e+03
1000000 1.55e+04 3.36e+04

Comparing to my original email one can see that the 2D results are even more satisfying. The same factor-5 speedup at the smallest size and a significant decrease also at N=10000 (almost 50%). When I get around to it I will try this out with some more "realistic" matrices. Regards, Tony Stillfjord

On Sat, Oct 1, 2011 at 5:37 PM, Pauli Virtanen wrote:
> Hi,
>
> Here is some optimization reducing the runtime overhead of scipy.sparse
> matrix-vector multiplication by a factor of 5.
>
> https://github.com/pv/scipy-work/compare/master...enh/sparse-speedup
>
> And a patch against Scipy 0.9.0 (@Tony: maybe you want to try it out?):
>
> https://github.com/pv/scipy-work/compare/v0.9.0...enh/sparse-speedup-0.9.patch
>
> ***
>
> Quick benchmark: http://dl.dropbox.com/u/5453551/bench_sparse.py
> (Multiply vector with 1-D CSR Laplacian operator.)
>
> After:
>
> % N scipy.sparse pysparse
> 32 7.169e-06 1.048e-06
> 64 7.367e-06 1.787e-06
> 128 7.814e-06 3.284e-06
> 256 8.633e-06 6.336e-06
> 512 1.025e-05 1.241e-05
> 1024 1.435e-05 2.455e-05
> 2048 1.989e-05 4.882e-05
> 4096 3.384e-05 9.798e-05
> 8192 6.098e-05 0.0001959
>
> Before:
>
> % N scipy.sparse pysparse
> 32 3.708e-05 1.032e-06
> 64 3.736e-05 1.803e-06
> 128 3.777e-05 3.368e-06
> 256 3.95e-05 6.324e-06
> 512 4.116e-05 1.267e-05
> 1024 4.661e-05 2.455e-05
> 2048 5.38e-05 4.873e-05
> 4096 6.946e-05 9.763e-05
> 8192 9.563e-05 0.0001959
>
> The cross-over occurs around N ~ 300 instead of around N ~ 3000. The main
> reason for the overhead is that multiplication with a sparse Laplacian is a
> pretty lightweight operation, so the fact that some of scipy.sparse is
> written in pure Python starts to matter.
>
> --
> Pauli Virtanen
-------------- next part -------------- An HTML attachment was scrubbed...
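[Editor's note: the 1-D CSR Laplacian matvec benchmark discussed in this thread can be sketched as below. This is an assumption of the setup, not the actual bench_sparse.py script, and only the scipy.sparse side is shown since pysparse may not be installed.]

```python
import timeit
import numpy as np
import scipy.sparse as sp

def laplacian_1d(n):
    # 1-D second-difference (1, -2, 1) operator stored in CSR format.
    return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")

for n in [32, 256, 2048, 16384]:
    A = laplacian_1d(n)
    x = np.ones(n)
    reps = 200
    # Best of 3 repeats, averaged over `reps` matvecs each.
    best = min(timeit.repeat(lambda: A.dot(x), number=reps, repeat=3)) / reps
    print("%6d  %.3e s per matvec" % (n, best))
```

The cross-over behaviour described in the thread shows up here as the per-matvec time growing roughly linearly in n only once the constant Python-side overhead is amortized.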
URL: From bala.biophysics at gmail.com Mon Oct 3 05:38:31 2011 From: bala.biophysics at gmail.com (Bala subramanian) Date: Mon, 3 Oct 2011 11:38:31 +0200 Subject: [SciPy-User] bivariate distribution Message-ID: Friends, Is there any way to calculate the bivariate/joint probability distribution function in scipy. Thanks, Bala -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Mon Oct 3 07:08:14 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 3 Oct 2011 06:08:14 -0500 Subject: [SciPy-User] bivariate distribution In-Reply-To: References: Message-ID: On Mon, Oct 3, 2011 at 4:38 AM, Bala subramanian wrote: > Friends, > Is there any way to calculate the bivariate/joint probability distribution > function in scipy. > > If by "the" bivariate/joint probability distribution you mean a bivariate normal distribution, you can use numpy.random.multivariate_normal: http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html#numpy.random.multivariate_normal Warren Thanks, > Bala > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Mon Oct 3 07:09:47 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 3 Oct 2011 06:09:47 -0500 Subject: [SciPy-User] bivariate distribution In-Reply-To: References: Message-ID: On Mon, Oct 3, 2011 at 6:08 AM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Mon, Oct 3, 2011 at 4:38 AM, Bala subramanian < > bala.biophysics at gmail.com> wrote: > >> Friends, >> Is there any way to calculate the bivariate/joint probability distribution >> function in scipy. 
>> >> > If by "the" bivariate/joint probability distribution you mean a bivariate > normal distribution, you can use numpy.random.multivariate_normal: > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html#numpy.random.multivariate_normal > > Well, to draw samples from it, anyway... > Warren > > > Thanks, >> Bala >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schut at sarvision.nl Mon Oct 3 09:29:52 2011 From: schut at sarvision.nl (Vincent Schut) Date: Mon, 03 Oct 2011 15:29:52 +0200 Subject: [SciPy-User] Fitting a model to an image to recover the position In-Reply-To: <42041109-D42D-4C0C-8911-65631307C91E@unc.edu> References: <42041109-D42D-4C0C-8911-65631307C91E@unc.edu> Message-ID: On 09/30/2011 05:18 PM, Niemi, Sami wrote: > Hello, > > I am trying to solve a problem where I need to fit a model to an image to recover the position (of the model in that image). The real-life application is more complicated (fitting sparse field spectroscopic data to an SDSS r-band image, if you are interested in) than the simple example I give below, but it is analogous. The most significant difference being that in the real-life application I need to allow rotations (so I need to find a position x and y and rotation r that minimizes for example the chi**2 value) and that the difference between the model and image might be larger than the small random error applied in the simple example (and that there is less information in one of the directions because of the finite slit width). > > The simple example I show below works for the real-life problem, but it's far from being effective or elegant. 
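[Editor's note: a hypothetical sketch of the kind of position fit being described — this is not the poster's example; the image, the Gaussian model, and all parameter values below are made up, and the rotation parameter from the real application is omitted.]

```python
import numpy as np
from scipy.optimize import leastsq

yy, xx = np.mgrid[0:64, 0:64]

def model(x0, y0):
    # Hypothetical model: a unit-amplitude Gaussian blob of fixed width 3.
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * 3.0 ** 2))

# Synthetic "image": the model at a true offset plus a small random error.
rng = np.random.RandomState(0)
image = model(30.5, 25.2) + 0.01 * rng.randn(64, 64)

def residuals(p):
    # leastsq minimizes the sum of squares of this flat residual vector,
    # so there is no need to form a chi**2 value by hand.
    return (image - model(*p)).ravel()

p_fit, ier = leastsq(residuals, x0=(28.0, 28.0))
print(p_fit)  # recovers approximately (30.5, 25.2)
```

Adding a rotation would just mean passing a third parameter through to the model; the residual-vector interface stays the same.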
I would like to use some in-built minimization methods like scipy.optimize.leastsq to solve this problem but I haven't found a way to get it work on my problem. Any ideas how I could do this better? > You might be interested in a totally different approach: registration (and rotation, if you want, but it's slightly more complicated) based on fft phase correlation. Google 'fft registration' and/of 'fft registration rotation' for more info, there is quite a lot available, both theory and algorithmic. I've been using it for coregistration of satellite images, and it works wonderfully. You can even easily get sub-pixel accuracy if you want. Best, Vincent. From nouiz at nouiz.org Tue Oct 4 09:10:32 2011 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Tue, 4 Oct 2011 09:10:32 -0400 Subject: [SciPy-User] In which numpy modules should MKL be better then ATLAS In-Reply-To: References: <4E86EA29.6060007@astro.uio.no> Message-ID: Hi, Atlas will usually get close to mkl speed, but it will take more time and you must compile it yourself. So on recent cpu, mkl should be faster. There is 2 cases that could make this different: do atlas and mkl used the same number ofthread? You should count only real core on the cpu, not hyperthread core. Having more then the number of real core will probably slow thing down. Some version of mkl have a speed problem with gemm(used by dot). The speed penalty is 2x. I don't know witch version are afftected. Fred On Oct 1, 2011 6:35 AM, "Klonuo Umom" wrote: > It's old Intel P4 3Ghz > ATLAS/LAPACK are build from source so maybe more optimized > > > On Sat, Oct 1, 2011 at 12:23 PM, Dag Sverre Seljebotn < > d.s.seljebotn at astro.uio.no> wrote: > >> On 10/01/2011 11:55 AM, Klonuo Umom wrote: >> > I had a chance to test this sample on different setups on same PC: >> > >> > import numpy as np >> > A=np.ones((1000,1000)) >> > B=np.ones((1000,1000)) >> > %timeit np.dot(A, B) >> > >> > because of OS reinstalling. 
>> > >> > 1x = ATLAS on Linux (reference speed) >> > 2x = MKL with GNU compilers on Linux >> > 2x = MKL with Intel compilers on Windows 7 >> > 30x = bare numpy >> > >> > I didn't plan to do this so I didn't test additional calculations, and I >> > was using latest version to date for all products. >> > >> > On Internet I usually find that MKL should outperform ATLAS. I'm curious >> > what would linalg module testing give, but as said I didn't test it. So >> > in which modules should user expect impact of MKL over ATLAS? In matrix >> > dot product obviously not. >> >> What CPU are you on? MKL is tuned for Intel CPUs, perhaps ATLAS >> outperforms it on AMD ones. >> >> Dag Sverre >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Tue Oct 4 11:10:33 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Tue, 4 Oct 2011 17:10:33 +0200 Subject: [SciPy-User] In which numpy modules should MKL be better then ATLAS In-Reply-To: References: <4E86EA29.6060007@astro.uio.no> Message-ID: OK, I'm not some skilled programmer, but I suspected as ATLAS autotunes during building and MKL just installs out of box, that ATLAS could be more optimized. This is not some serious test as it's just one calculation. 
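[Editor's note: the measurement above uses IPython's %timeit; with only the standard library, the same one-off check can be sketched as follows.]

```python
import timeit
import numpy as np

A = np.ones((1000, 1000))
B = np.ones((1000, 1000))

# Plain-stdlib equivalent of IPython's `%timeit np.dot(A, B)`:
# best of 3 repeats, averaged over `reps` calls each.
reps = 5
best = min(timeit.repeat(lambda: np.dot(A, B), number=reps, repeat=3)) / reps
print("np.dot: %.1f ms per loop" % (1000 * best))
```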
FYI, while still speaking about old single core processors, I installed using same procedure and switches, exact same packages on 2.4GHz Pentium Celeron: atlas_threads_info: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] language = f77 blas_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] language = c atlas_blas_threads_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] language = c lapack_opt_info: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib'] define_macros = [('ATLAS_INFO', '"\\"3.8.4\\""')] language = f77 lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE The result was 2 times slower performance compared to 3GHz Celeron - or same as using MKL :D Intel(R) Celeron(R) CPU 2.40GHz cache size : 128 KB address sizes : 36 bits physical, 32 bits virtual %timeit np.dot(A, B) 1 loops, best of 3: 982 ms per loop vs Intel(R) Celeron(R) D CPU 3.06GHz cache size : 512 KB address sizes : 36 bits physical, 48 bits virtual %timeit np.dot(A, B) 1 loops, best of 3: 453 ms per loop This are old PCs, but I wont just throw them yet ;) 2011/10/4 Fr?d?ric Bastien > Hi, > > Atlas will usually get close to mkl speed, but it will take more time and > you must compile it yourself. So on recent cpu, mkl should be faster. > > There is 2 cases that could make this different: do atlas and mkl used the > same number ofthread? You should count only real core on the cpu, not > hyperthread core. Having more then the number of real core will probably > slow thing down. > > Some version of mkl have a speed problem with gemm(used by dot). The speed > penalty is 2x. I don't know witch version are afftected. 
> > Fred > On Oct 1, 2011 6:35 AM, "Klonuo Umom" wrote: > > It's old Intel P4 3Ghz > > ATLAS/LAPACK are build from source so maybe more optimized > > > > > > On Sat, Oct 1, 2011 at 12:23 PM, Dag Sverre Seljebotn < > > d.s.seljebotn at astro.uio.no> wrote: > > > >> On 10/01/2011 11:55 AM, Klonuo Umom wrote: > >> > I had a chance to test this sample on different setups on same PC: > >> > > >> > import numpy as np > >> > A=np.ones((1000,1000)) > >> > B=np.ones((1000,1000)) > >> > %timeit np.dot(A, B) > >> > > >> > because of OS reinstalling. > >> > > >> > 1x = ATLAS on Linux (reference speed) > >> > 2x = MKL with GNU compilers on Linux > >> > 2x = MKL with Intel compilers on Windows 7 > >> > 30x = bare numpy > >> > > >> > I didn't plan to do this so I didn't test additional calculations, and > I > >> > was using latest version to date for all products. > >> > > >> > On Internet I usually find that MKL should outperform ATLAS. I'm > curious > >> > what would linalg module testing give, but as said I didn't test it. > So > >> > in which modules should user expect impact of MKL over ATLAS? In > matrix > >> > dot product obviously not. > >> > >> What CPU are you on? MKL is tuned for Intel CPUs, perhaps ATLAS > >> outperforms it on AMD ones. > >> > >> Dag Sverre > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From woodsdm2 at muohio.edu Wed Oct 5 14:05:44 2011 From: woodsdm2 at muohio.edu (Woods, David M. Dr.) Date: Wed, 5 Oct 2011 14:05:44 -0400 Subject: [SciPy-User] test failure on Scipy 0.9.0 installation Message-ID: I just installed SciPy 0.9.0 with NumPy v 1.6.1 and Python 2.7.2. 
Running scipy.test() reports one failure: ====================================================================== FAIL: test_expon (test_morestats.TestAnderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/software/python/2.7.2/lib/python2.7/site-packages/scipy/stats/tests/test_morestats.py", line 72, in test_expon assert_array_less(crit[:-1], A) File "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 869, in assert_array_less header='Arrays are not less-ordered') File "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 613, in assert_array_compare chk_same_position(x_id, y_id, hasval='inf') File "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", line 588, in chk_same_position raise AssertionError(msg) AssertionError: Arrays are not less-ordered x and y inf location mismatch: x: array([ 0.911, 1.065, 1.325, 1.587]) y: array(inf) What do I need to do to fix this failure? Other useful info: Uname -a: Linux 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux Built using GCC 4.1.2, ATLAS 3.9.51, and Lapack 3.3.1. The system has a working installation of SciPy 0.7.1 with NumPy 1.3.0, Python 2.6.4, ATLAS 3.8.3, and Lapack 3.2.1. Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From tapitman11 at gmail.com Wed Oct 5 21:57:36 2011 From: tapitman11 at gmail.com (Tim Pitman) Date: Wed, 5 Oct 2011 18:57:36 -0700 Subject: [SciPy-User] Efficient replacement to matlab textscan Message-ID: Hi All, I'm working on a project which currently uses matlab to generate a real time plot which uses textscan (http://www.mathworks.com/help/techdoc/ref/textscan.html) to read a file with about 800 float values (all in one column but each on it's own line) every other point is the x value and the off values are the y values, totaling about 400 points. 
The function then plots all the points. This is done about 10 times per second providing a constantly updating graph. Matlab uses about 9% CPU when this is running. I'm trying to replace this with numpy/matplotlib in a wxPython app. My wxPython app runs at about 20% CPU without the plot running but once I start the plot at 10 updates per second CPU usage jumps up to %70 percent. I'm currently using genfromtxt to read the file. I'm looking for suggestions on how to optimize this as much as possible. Thanks! Tim From josef.pktd at gmail.com Wed Oct 5 22:06:26 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 5 Oct 2011 22:06:26 -0400 Subject: [SciPy-User] Efficient replacement to matlab textscan In-Reply-To: References: Message-ID: On Wed, Oct 5, 2011 at 9:57 PM, Tim Pitman wrote: > Hi All, > > I'm working on a project which currently uses matlab to generate a > real time plot which uses textscan > (http://www.mathworks.com/help/techdoc/ref/textscan.html) to read a > file with about 800 float values (all in one column but each on it's > own line) every other point is the x value and the off values are the > y values, totaling about 400 points. The function then plots all the > points. This is done about 10 times per second providing a constantly > updating graph. Matlab uses about 9% CPU when this is running. I'm > trying to replace this with numpy/matplotlib in a wxPython app. My > wxPython app runs at about 20% CPU without the plot running but once I > start the plot at 10 updates per second CPU usage jumps up to %70 > percent. I'm currently using genfromtxt to read the file. I'm looking > for suggestions on how to optimize this as much as possible. numpy.loadtxt and reshape ? genfromtxt tries to do a lot of inference about the file Josef > > Thanks! 
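[Editor's note: Josef's loadtxt-and-reshape suggestion can be sketched as below; the file contents here are a made-up stand-in for the described format (alternating x and y floats, one value per line).]

```python
import io
import numpy as np

# Hypothetical stand-in for the data file being re-read each update.
text = "\n".join(str(float(v)) for v in [0, 10, 1, 11, 2, 12, 3, 13])

values = np.loadtxt(io.StringIO(text))  # plain float parsing, no dtype inference
points = values.reshape(-1, 2)          # column 0 = x, column 1 = y
print(points.shape)  # -> (4, 2)
```

Unlike genfromtxt, loadtxt skips missing-value handling and type inference, which is where much of the per-update cost goes.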
> Tim > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From hnry2k at hotmail.com Wed Oct 5 22:40:15 2011 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Wed, 5 Oct 2011 21:40:15 -0500 Subject: [SciPy-User] Can't find Discrete Cosine Transform Message-ID: Need some help, I'm trying to get the Discrete Cosine Transform of a sequence S_m, according to http://docs.scipy.org/doc/scipy/reference/fftpack.html#module-scipy.fftpack it is defined as "dct" in the scipy.fftpack module, however when I try to use it after: from scipy import fftpack it does not offer me that option (just these others: cc_diff, cs_diff, diff, fft, fft2, fftn, hilbert, ifft, ifft2, ifftn, ihilbert, irfft, itilbert, rfft, rfftfreq, sc_diff, shift, ssdiff and tilbert). Trying it, gives me the error message: Traceback (most recent call last): File "/home/george/Escritorio/prueba1D05Oct2011.py", line 78, in FCTS_m = fftpack.dct(S_m,norm='ortho') AttributeError: 'module' object has no attribute 'dct' as expected. What I have to do to make it work??? Thanks in Advance Jorge -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Thu Oct 6 00:09:06 2011 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Thu, 6 Oct 2011 00:09:06 -0400 Subject: [SciPy-User] Can't find Discrete Cosine Transform In-Reply-To: References: Message-ID: Works for me with scipy '0.9.0' (python3.2) >>> fftpack.dct([0.5,0.8,896,0.778]) array([ 1796.156 , -685.67009433, -1266.4593578 , 1653.90114302]) -- Oleksandr Huziy 2011/10/5 Jorge E. 
?Sanchez Sanchez > Need some help, > > I'm trying to get the Discrete Cosine Transform of a sequence S_m, > according to > > http://docs.scipy.org/doc/scipy/reference/fftpack.html#module-scipy.fftpack > > it is defined as "dct" in the scipy.fftpack module, however when I try to > use it after: > > from scipy import fftpack > > it does not offer me that option (just these others: cc_diff, cs_diff, > diff, fft, fft2, > fftn, hilbert, ifft, ifft2, ifftn, ihilbert, irfft, itilbert, rfft, > rfftfreq, sc_diff, shift, ssdiff and > tilbert). > > Trying it, gives me the error message: > > Traceback (most recent call last): > File "/home/george/Escritorio/prueba1D05Oct2011.py", line 78, in > FCTS_m = fftpack.dct(S_m,norm='ortho') > AttributeError: 'module' object has no attribute 'dct' > > as expected. > > What I have to do to make it work??? > > Thanks in Advance > Jorge > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Thu Oct 6 00:16:48 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 5 Oct 2011 23:16:48 -0500 Subject: [SciPy-User] Can't find Discrete Cosine Transform In-Reply-To: References: Message-ID: 2011/10/5 Jorge E. ?Sanchez Sanchez > Need some help, > > I'm trying to get the Discrete Cosine Transform of a sequence S_m, > according to > > http://docs.scipy.org/doc/scipy/reference/fftpack.html#module-scipy.fftpack > > it is defined as "dct" in the scipy.fftpack module, however when I try to > use it after: > > from scipy import fftpack > > it does not offer me that option (just these others: cc_diff, cs_diff, > diff, fft, fft2, > fftn, hilbert, ifft, ifft2, ifftn, ihilbert, irfft, itilbert, rfft, > rfftfreq, sc_diff, shift, ssdiff and > tilbert). 
> > Trying it, gives me the error message: > > Traceback (most recent call last): > File "/home/george/Escritorio/prueba1D05Oct2011.py", line 78, in > FCTS_m = fftpack.dct(S_m,norm='ortho') > AttributeError: 'module' object has no attribute 'dct' > > as expected. > > What I have to do to make it work??? > dct was added to scipy in version 0.8: http://docs.scipy.org/doc/scipy/reference/release.0.8.0.html What version are you using? Warren > Thanks in Advance > Jorge > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.sinclair.za at gmail.com Thu Oct 6 02:44:29 2011 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Thu, 6 Oct 2011 08:44:29 +0200 Subject: [SciPy-User] test failure on Scipy 0.9.0 installation In-Reply-To: References: Message-ID: On 5 October 2011 20:05, Woods, David M. Dr. wrote: > > > I just installed SciPy 0.9.0 with NumPy v 1.6.1 and Python 2.7.2. Running > scipy.test() reports one failure: > > > > ====================================================================== > > FAIL: test_expon (test_morestats.TestAnderson) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > "/software/python/2.7.2/lib/python2.7/site-packages/scipy/stats/tests/test_morestats.py", > line 72, in test_expon > > assert_array_less(crit[:-1], A) > > File > "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", > line 869, in assert_array_less > > header='Arrays are not less-ordered') > > File > "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", > line 613, in assert_array_compare > > chk_same_position(x_id, y_id, hasval='inf') > >
File > "/software/python/2.7.2/lib/python2.7/site-packages/numpy/testing/utils.py", > line 588, in chk_same_position > > ??? raise AssertionError(msg) > > AssertionError: > > Arrays are not less-ordered > > > > x and y inf location mismatch: > > x: array([ 0.911,? 1.065,? 1.325,? 1.587]) > > y: array(inf) This has been fixed, https://github.com/scipy/scipy/commit/89c88431e648369d4a15b4fd4e762a28eb86abd8 I don't see the failure with scipy 0.10.0b2 built against numpy 1.6.1. Cheers, Scott From L.J.Buitinck at uva.nl Thu Oct 6 07:43:46 2011 From: L.J.Buitinck at uva.nl (Lars Buitinck) Date: Thu, 6 Oct 2011 13:43:46 +0200 Subject: [SciPy-User] .A property on scipy.sparse matrices Message-ID: Hi all, While inspecting the scipy.maxentropy package, I noted that it used a .A property on scipy.sparse matrices to convert those to numpy.ndarrays; it seems to do the same as `.toarray()`. I'd never seen this property before and I can't seem to find it in the docs [1] either. Is this a part of the public API or not? [1] http://docs.scipy.org/doc/scipy/reference/sparse.html TIA, -- Lars Buitinck Scientific programmer, ILPS University of Amsterdam From hnry2k at hotmail.com Thu Oct 6 12:27:07 2011 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Thu, 6 Oct 2011 11:27:07 -0500 Subject: [SciPy-User] Can't find Discrete Cosine Transform In-Reply-To: References: , Message-ID: Thanks Warren and Oleksandr, I was thinking I have version 0.8 but it is 0.7.0, I have to update it then. Best Regards Jorge From: warren.weckesser at enthought.com Date: Wed, 5 Oct 2011 23:16:48 -0500 To: scipy-user at scipy.org Subject: Re: [SciPy-User] Can't find Discrete Cosine Transform 2011/10/5 Jorge E. 
?Sanchez Sanchez Need some help, I'm trying to get the Discrete Cosine Transform of a sequence S_m, according to http://docs.scipy.org/doc/scipy/reference/fftpack.html#module-scipy.fftpack it is defined as "dct" in the scipy.fftpack module, however when I try to use it after: from scipy import fftpack it does not offer me that option (just these others: cc_diff, cs_diff, diff, fft, fft2, fftn, hilbert, ifft, ifft2, ifftn, ihilbert, irfft, itilbert, rfft, rfftfreq, sc_diff, shift, ssdiff and tilbert). Trying it, gives me the error message: Traceback (most recent call last): File "/home/george/Escritorio/prueba1D05Oct2011.py", line 78, in FCTS_m = fftpack.dct(S_m,norm='ortho') AttributeError: 'module' object has no attribute 'dct' as expected. What I have to do to make it work??? dct was added to scipy in version 0.8: http://docs.scipy.org/doc/scipy/reference/release.0.8.0.html What version are you using? Warren Thanks in Advance Jorge _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Thu Oct 6 12:29:13 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 6 Oct 2011 12:29:13 -0400 Subject: [SciPy-User] simulated annealing initial temperature Message-ID: Was just playing around and noticed this. Which is "right," docs or code? Docs: T0 : float Initial Temperature (estimated as 1.2 times the largest cost-function deviation over random points in the range). 
Code: self.T0 = (fmax-fmin)*1.5 Skipper From lorenzo.isella at gmail.com Fri Oct 7 06:46:29 2011 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Fri, 7 Oct 2011 12:46:29 +0200 Subject: [SciPy-User] Image Enhancing in Python Message-ID: Dear All, I hope this is not too off topic. I saw some wikis about Python and image analysis. What I am trying to do should not be too difficult, but I am unsure about how to proceed. It is my understanding that a color image (RGB), for Python (and not only) is nothing else that a combination of 3 different 2D arrays. What I would like to do is the following: take this color image, turn it into a grayscale image (and this is the easy bit) with the exception of the red color (which should be preserved). The final image should therefore be grayscale and red. I suppose I will have to do some thresholding by hand to select which red shades I want to keep, but I am unsure about how to proceed. Any suggestion is welcome. Cheers Lorenzo From eraldo.pomponi at gmail.com Fri Oct 7 06:51:39 2011 From: eraldo.pomponi at gmail.com (Eraldo Pomponi) Date: Fri, 7 Oct 2011 12:51:39 +0200 Subject: [SciPy-User] Image Enhancing in Python In-Reply-To: References: Message-ID: Dear Lorenzo, Check the scipy.ndimage package. http://docs.scipy.org/doc/scipy/reference/ndimage.html#module-scipy.ndimage Cheers, Eraldo On Fri, Oct 7, 2011 at 12:46 PM, Lorenzo Isella wrote: > Dear All, > I hope this is not too off topic. > I saw some wikis about Python and image analysis. What I am trying to > do should not be too difficult, but I am unsure about how to proceed. > It is my understanding that a color image (RGB), for Python (and not > only) is nothing else that a combination of 3 different 2D arrays. > What I would like to do is the following: take this color image, turn > it into a grayscale image (and this is the easy bit) with the > exception of the red color (which should be preserved). > The final image should therefore be grayscale and red. 
I suppose I > will have to do some thresholding by hand to select which red shades I > want to keep, but I am unsure about how to proceed. > Any suggestion is welcome. > Cheers > > Lorenzo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eraldo.pomponi at gmail.com Fri Oct 7 07:18:32 2011 From: eraldo.pomponi at gmail.com (Eraldo Pomponi) Date: Fri, 7 Oct 2011 13:18:32 +0200 Subject: [SciPy-User] Image Enhancing in Python In-Reply-To: References: Message-ID: Dear Lorenzo, I forgot to add a link to a useful lecture about images manipulation using Numpy and Scipy : http://scipy-lectures.github.com/advanced/image_processing/index.html Cheers, Eraldo On Fri, Oct 7, 2011 at 12:46 PM, Lorenzo Isella wrote: > Dear All, > I hope this is not too off topic. > I saw some wikis about Python and image analysis. What I am trying to > do should not be too difficult, but I am unsure about how to proceed. > It is my understanding that a color image (RGB), for Python (and not > only) is nothing else that a combination of 3 different 2D arrays. > What I would like to do is the following: take this color image, turn > it into a grayscale image (and this is the easy bit) with the > exception of the red color (which should be preserved). > The final image should therefore be grayscale and red. I suppose I > will have to do some thresholding by hand to select which red shades I > want to keep, but I am unsure about how to proceed. > Any suggestion is welcome. > Cheers > > Lorenzo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iara.mbp at gmail.com Fri Oct 7 09:43:17 2011 From: iara.mbp at gmail.com (Iara Pereira) Date: Fri, 7 Oct 2011 13:43:17 +0000 (UTC) Subject: [SciPy-User] computing surface gradients using mayavi mlab Message-ID: Hi there, I'm using mlab.mesh to visualize data of a pressue field over a streamlined body and I would like to compute the pressure gradient on that surface. I was wondering, since mayavi uses some kind of interpolation method to represent the field between mesh nods, if it is possible to extract the derivatives on each cell from the vtk object. Or is there any other alternative besides triangulating the surface and calculating finite differences for interpolated points inside the triangle? Thanks for the help... Iara From gael.varoquaux at normalesup.org Fri Oct 7 10:25:07 2011 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 7 Oct 2011 16:25:07 +0200 Subject: [SciPy-User] computing surface gradients using mayavi mlab In-Reply-To: References: Message-ID: <20111007142507.GF25670@phare.normalesup.org> On Fri, Oct 07, 2011 at 01:43:17PM +0000, Iara Pereira wrote: > I was wondering, since mayavi uses some kind of interpolation method to > represent the field between mesh nods, if it is possible to extract the > derivatives on each cell from the vtk object. I am not sure what you are looking for: what are these derivatives (what is derived as a function of want)? 
If you have a volumetric data set (which a mesh isn't), you can use the VTK interpolation engine with mlab.pipeline.probe_data http://github.enthought.com/mayavi/mayavi/auto/mlab_pipeline_data.html#mayavi.tools.pipeline.probe_data See also: http://github.enthought.com/mayavi/mayavi/data.html#retrieving-the-data-from-mayavi-pipelines HTH, Gael From tdimiduk at physics.harvard.edu Fri Oct 7 11:28:08 2011 From: tdimiduk at physics.harvard.edu (Tom Dimiduk) Date: Fri, 07 Oct 2011 11:28:08 -0400 Subject: [SciPy-User] Image Enhancing in Python In-Reply-To: References: Message-ID: <4E8F1A88.20100@physics.harvard.edu> That shouldn't be too hard. When I work with images I use PIL and scipy. If you have those two libraries installed, this code should do what I think you want (if I am interpreting what you want correctly). I have not actually tested this code, so there may be a few bugs, but it should get you the right idea. Tom --- # this loads the image as an NxMx3 array with the color chanels in the # third direction import numpy as np import Image from scipy.misc.pilutil import fromimage im = Image.open(filename) im = fromimage(im).astype('d') # and here we do the channel arithmetic you like red = im[...,0] gray = im[...,1] + im[...,2] # red and gray are now NxM images which have the data you want # This is now two images, if you want it to be 1 you could use vstack # to put them back together into a single image, I would suggest then # putting the "gray" value in twice so that it is an NxMx3 image again # as things will expect. composite = np.vstack((red, gray, gray)) On 10/07/2011 06:46 AM, Lorenzo Isella wrote: > Dear All, > I hope this is not too off topic. > I saw some wikis about Python and image analysis. What I am trying to > do should not be too difficult, but I am unsure about how to proceed. > It is my understanding that a color image (RGB), for Python (and not > only) is nothing else that a combination of 3 different 2D arrays. 
> What I would like to do is the following: take this color image, turn > it into a grayscale image (and this is the easy bit) with the > exception of the red color (which should be preserved). > The final image should therefore be grayscale and red. I suppose I > will have to do some thresholding by hand to select which red shades I > want to keep, but I am unsure about how to proceed. > Any suggestion is welcome. > Cheers > > Lorenzo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From pav at iki.fi Fri Oct 7 13:01:22 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 07 Oct 2011 19:01:22 +0200 Subject: [SciPy-User] Segmentation fault with scipy v0.9.0 In-Reply-To: References: Message-ID: 20.09.2011 17:43, Baker D.J. wrote: > I'm building scipy v0.9.0 on a RHELS 5.3 cluster. I'm building the > package with python 2.6.5, numpy 1.6.1 and the GNU compilers v4.1.2. > I've kept things simple and just used the 'bog standard' BLAS/LAPACK > installed via RHELS rpms. I've built and tested numpy today and that is > fine. On the other hand I find that the scipy tests fail with a > segmentation fault. On running the tests with scipy.test(verbose=2) I > find the following failure: > > test_nonlin.TestJacobianDotSolve.test_broyden1 ... Segmentation fault I saw similar behavior on one machine. The reason was that my Numpy and Scipy were linked against different BLAS libraries. Although I was linking Scipy against a specific BLAS, it apparently still got the ZDOTC symbol (which that test ends up calling) from some other library. If I renamed the 'ZDOTC' function to 'ZDOTCC' in the BLAS library and adjusted Scipy similarly, then it worked --- so apparently the name ZDOTC was overridden at runtime... *** However, recompiling Numpy resolved that issue for me. Maybe it also helps in your case?
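One quick way to start diagnosing such a mismatch is to compare the BLAS/LAPACK build configuration that NumPy and SciPy each report. A minimal sketch (the exact fields printed vary between versions, and this only shows what the packages were built against, not what the dynamic linker resolves at runtime):

```python
# Print the BLAS/LAPACK configuration each package was built against.
# If the two report different libraries, a runtime symbol clash of the
# kind described above (ZDOTC resolved from the wrong library) becomes
# plausible.
import numpy
numpy.__config__.show()

try:
    import scipy
    scipy.__config__.show()
except ImportError:
    pass  # scipy not installed or not yet built
```

For the runtime side, inspecting the loaded shared libraries with ldd (Linux) or otool -L (OS X) on the extension modules gives the rest of the picture.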
-- Pauli Virtanen From josef.pktd at gmail.com Sat Oct 8 13:49:19 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 8 Oct 2011 13:49:19 -0400 Subject: [SciPy-User] permutation pvalues and stats ? Message-ID: my first gist: https://gist.github.com/1270325 discussion: end of https://github.com/scipy/scipy/pull/87 Sturla also has posted several cases to the mailing list over the years. Josef In spite of the impression that someone looking at the commitlog might get: I'm not dead - just nostalgic From paul.anton.letnes at gmail.com Sun Oct 9 05:49:19 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Sun, 9 Oct 2011 11:49:19 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack Message-ID: Hi everyone. I am trying to build scipy using gcc and gfortran downloaded from here: http://hpc.sourceforge.net/ However, it appears that ARPACK or its wrappers fail to build for some reason - see the log at the end of the email. I would appreciate any help getting a working scipy installation running. For the record, 'pip install scipy' fails with what seems to me like the same error. I also noticed the flag '-march=core2' somewhere - this seems a little odd, as my machine has an i7 processor. But then, maybe these architectures are the same? At any rate it should not affect the error, which seems to be syntax related. Cheers Paul System info: ++++++++++++ scipy version 0.9.0 (0.10 beta2 works but fails scipy.test()) Mac OS X 10.7.1 # preinstalled, not upgraded to % uname -a Darwin courant.local 11.1.0 Darwin Kernel Version 11.1.0: Tue Jul 26 16:07:11 PDT 2011; root:xnu-1699.22.81~1/RELEASE_X86_64 x86_64 % python # built from homebrew Python 2.7.2 (default, Oct 8 2011, 00:40:41) [GCC 4.2.1 (Based on Apple Inc. 
build 5658) (LLVM build 2335.15.00)] on darwin % gcc --version # from hpc.sourceforge.net gcc (GCC) 4.6.1 % gfortran --version # from hpc.sourceforge.net GNU Fortran (GCC) 4.6.1 >>> numpy.__version__ '1.6.1' Build log: ++++++++++ Warning: No configuration returned, assuming unavailable.blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] non-existing path in 'scipy/io': 'docs' lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] umfpack_info: libraries umfpack not found in /usr/local/bin/../Cellar/python/2.7.2/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/distutils/system_info.py:460: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build/src.macosx-10.4-x86_64-2.7 creating build/src.macosx-10.4-x86_64-2.7/scipy building library "dfftpack" sources building library "fftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "dop" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "qhull" sources building library "sc_c_misc" sources building library "sc_cephes" sources building library "sc_mach" sources building library "sc_toms" sources building library "sc_amos" sources building library "sc_cdf" sources building library "sc_specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/fftpack creating build/src.macosx-10.4-x86_64-2.7/scipy/fftpack/src conv_template:> build/src.macosx-10.4-x86_64-2.7/scipy/fftpack/src/dct.c f2py options: [] f2py: scipy/fftpack/fftpack.pyf Reading fortran codes... Reading file 'scipy/fftpack/fftpack.pyf' (format:free) Line #86 in scipy/fftpack/fftpack.pyf:" /* Single precision version */" crackline:2: No pattern for line Post-processing... 
Block: _fftpack Block: zfft Block: drfft Block: zrfft Block: zfftnd Block: destroy_zfft_cache Block: destroy_zfftnd_cache Block: destroy_drfft_cache Block: cfft Block: rfft Block: crfft Block: cfftnd Block: destroy_cfft_cache Block: destroy_cfftnd_cache Block: destroy_rfft_cache Block: ddct1 Block: ddct2 Block: ddct3 Block: dct1 Block: dct2 Block: dct3 Block: destroy_ddct2_cache Block: destroy_ddct1_cache Block: destroy_dct2_cache Block: destroy_dct1_cache Post-processing (stage 2)... Building modules... Building module "_fftpack"... Constructing wrapper function "zfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = zfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "drfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = drfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "zrfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = zrfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "zfftnd"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = zfftnd(x,[s,direction,normalize,overwrite_x]) Constructing wrapper function "destroy_zfft_cache"... destroy_zfft_cache() Constructing wrapper function "destroy_zfftnd_cache"... destroy_zfftnd_cache() Constructing wrapper function "destroy_drfft_cache"... destroy_drfft_cache() Constructing wrapper function "cfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = cfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "rfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = rfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "crfft"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = crfft(x,[n,direction,normalize,overwrite_x]) Constructing wrapper function "cfftnd"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' y = cfftnd(x,[s,direction,normalize,overwrite_x]) Constructing wrapper function "destroy_cfft_cache"... destroy_cfft_cache() Constructing wrapper function "destroy_cfftnd_cache"... destroy_cfftnd_cache() Constructing wrapper function "destroy_rfft_cache"... destroy_rfft_cache() Constructing wrapper function "ddct1"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = ddct1(x,[n,normalize,overwrite_x]) Constructing wrapper function "ddct2"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = ddct2(x,[n,normalize,overwrite_x]) Constructing wrapper function "ddct3"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = ddct3(x,[n,normalize,overwrite_x]) Constructing wrapper function "dct1"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = dct1(x,[n,normalize,overwrite_x]) Constructing wrapper function "dct2"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = dct2(x,[n,normalize,overwrite_x]) Constructing wrapper function "dct3"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = dct3(x,[n,normalize,overwrite_x]) Constructing wrapper function "destroy_ddct2_cache"... destroy_ddct2_cache() Constructing wrapper function "destroy_ddct1_cache"... destroy_ddct1_cache() Constructing wrapper function "destroy_dct2_cache"... destroy_dct2_cache() Constructing wrapper function "destroy_dct1_cache"... destroy_dct1_cache() Wrote C/API module "_fftpack" to file "build/src.macosx-10.4-x86_64-2.7/scipy/fftpack/_fftpackmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. 
copying /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> build/src.macosx-10.4-x86_64-2.7 copying /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> build/src.macosx-10.4-x86_64-2.7 building extension "scipy.fftpack.convolve" sources f2py options: [] f2py: scipy/fftpack/convolve.pyf Reading fortran codes... Reading file 'scipy/fftpack/convolve.pyf' (format:free) Post-processing... Block: convolve__user__routines Block: kernel_func Block: convolve Block: init_convolution_kernel In: scipy/fftpack/convolve.pyf:convolve:unknown_interface:init_convolution_kernel get_useparameters: no module convolve__user__routines info used by init_convolution_kernel Block: destroy_convolve_cache Block: convolve Block: convolve_z Post-processing (stage 2)... Building modules... Constructing call-back function "cb_kernel_func_in_convolve__user__routines" def kernel_func(k): return kernel_func Building module "convolve"... Constructing wrapper function "init_convolution_kernel"... omega = init_convolution_kernel(n,kernel_func,[d,zero_nyquist,kernel_func_extra_args]) Constructing wrapper function "destroy_convolve_cache"... destroy_convolve_cache() Constructing wrapper function "convolve"... y = convolve(x,omega,[swap_real_imag,overwrite_x]) Constructing wrapper function "convolve_z"... y = convolve_z(x,omega_real,omega_imag,[overwrite_x]) Wrote C/API module "convolve" to file "build/src.macosx-10.4-x86_64-2.7/scipy/fftpack/convolvemodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/integrate f2py options: [] f2py: scipy/integrate/vode.pyf Reading fortran codes... 
Reading file 'scipy/integrate/vode.pyf' (format:free) Post-processing... Block: dvode__user__routines Block: dvode_user_interface Block: f Block: jac Block: zvode__user__routines Block: zvode_user_interface Block: f Block: jac Block: vode Block: dvode In: scipy/integrate/vode.pyf:vode:unknown_interface:dvode get_useparameters: no module dvode__user__routines info used by dvode Block: zvode In: scipy/integrate/vode.pyf:vode:unknown_interface:zvode get_useparameters: no module zvode__user__routines info used by zvode Post-processing (stage 2)... Building modules... Constructing call-back function "cb_f_in_dvode__user__routines" def f(t,y): return ydot Constructing call-back function "cb_jac_in_dvode__user__routines" def jac(t,y): return jac Constructing call-back function "cb_f_in_zvode__user__routines" def f(t,y): return ydot Constructing call-back function "cb_jac_in_zvode__user__routines" def jac(t,y): return jac Building module "vode"... Constructing wrapper function "dvode"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y,t,istate = dvode(f,jac,y,t,tout,rtol,atol,itask,istate,rwork,iwork,mf,[f_extra_args,jac_extra_args,overwrite_y]) Constructing wrapper function "zvode"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y,t,istate = zvode(f,jac,y,t,tout,rtol,atol,itask,istate,zwork,rwork,iwork,mf,[f_extra_args,jac_extra_args,overwrite_y]) Wrote C/API module "vode" to file "build/src.macosx-10.4-x86_64-2.7/scipy/integrate/vodemodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.integrate._dop" sources f2py options: [] f2py: scipy/integrate/dop.pyf Reading fortran codes... 
Reading file 'scipy/integrate/dop.pyf' (format:free) Post-processing... Block: __user__routines Block: fcn Block: solout Block: _dop Block: dopri5 In: scipy/integrate/dop.pyf:_dop:unknown_interface:dopri5 get_useparameters: no module __user__routines info used by dopri5 Block: dop853 In: scipy/integrate/dop.pyf:_dop:unknown_interface:dop853 get_useparameters: no module __user__routines info used by dop853 Post-processing (stage 2)... Building modules... Constructing call-back function "cb_fcn_in___user__routines" def fcn(x,y): return f Constructing call-back function "cb_solout_in___user__routines" def solout(nr,xold,x,y,con,icomp,[nd]): return irtn Building module "_dop"... Constructing wrapper function "dopri5"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y,iwork,idid = dopri5(fcn,x,y,xend,rtol,atol,solout,work,iwork,[fcn_extra_args,overwrite_y,solout_extra_args]) Constructing wrapper function "dop853"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y,iwork,idid = dop853(fcn,x,y,xend,rtol,atol,solout,work,iwork,[fcn_extra_args,overwrite_y,solout_extra_args]) Wrote C/API module "_dop" to file "build/src.macosx-10.4-x86_64-2.7/scipy/integrate/_dopmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. 
building extension "scipy.interpolate.interpnd" sources building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/interpolate creating build/src.macosx-10.4-x86_64-2.7/scipy/interpolate/src f2py options: [] f2py: scipy/interpolate/src/fitpack.pyf Reading fortran codes... Reading file 'scipy/interpolate/src/fitpack.pyf' (format:free) Post-processing... Block: dfitpack Block: splev Block: splder Block: splint Block: sproot Block: spalde Block: curfit Block: percur Block: parcur Block: fpcurf0 Block: fpcurf1 Block: fpcurfm1 Block: bispev Block: bispeu Block: surfit_smth Block: surfit_lsq Block: regrid_smth Block: dblint Post-processing (stage 2)... Building modules... Building module "dfitpack"... Constructing wrapper function "splev"... y = splev(t,c,k,x,[e]) Constructing wrapper function "splder"... y = splder(t,c,k,x,[nu,e]) Creating wrapper for Fortran function "splint"("splint")... Constructing wrapper function "splint"... splint = splint(t,c,k,a,b) Constructing wrapper function "sproot"... zero,m,ier = sproot(t,c,[mest]) Constructing wrapper function "spalde"... d,ier = spalde(t,c,k,x) Constructing wrapper function "curfit"... n,c,fp,ier = curfit(iopt,x,y,w,t,wrk,iwrk,[xb,xe,k,s]) Constructing wrapper function "percur"... n,c,fp,ier = percur(iopt,x,y,w,t,wrk,iwrk,[k,s]) Constructing wrapper function "parcur"... n,c,fp,ier = parcur(iopt,ipar,idim,u,x,w,ub,ue,t,wrk,iwrk,[k,s]) Constructing wrapper function "fpcurf0"... x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = fpcurf0(x,y,k,[w,xb,xe,s,nest]) Constructing wrapper function "fpcurf1"... x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = fpcurf1(x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier,[overwrite_x,overwrite_y,overwrite_w,overwrite_t,overwrite_c,overwrite_fpint,overwrite_nrdata]) Constructing wrapper function "fpcurfm1"... 
x,y,w,xb,xe,k,s,n,t,c,fp,fpint,nrdata,ier = fpcurfm1(x,y,k,t,[w,xb,xe,overwrite_t]) Constructing wrapper function "bispev"... z,ier = bispev(tx,ty,c,kx,ky,x,y) Constructing wrapper function "bispeu"... z,ier = bispeu(tx,ty,c,kx,ky,x,y) Constructing wrapper function "surfit_smth"... nx,tx,ny,ty,c,fp,wrk1,ier = surfit_smth(x,y,z,[w,xb,xe,yb,ye,kx,ky,s,nxest,nyest,eps,lwrk2]) Constructing wrapper function "surfit_lsq"... tx,ty,c,fp,ier = surfit_lsq(x,y,z,tx,ty,[w,xb,xe,yb,ye,kx,ky,eps,lwrk2,overwrite_tx,overwrite_ty]) Constructing wrapper function "regrid_smth"... nx,tx,ny,ty,c,fp,ier = regrid_smth(x,y,z,[xb,xe,yb,ye,kx,ky,s]) Creating wrapper for Fortran function "dblint"("dblint")... Constructing wrapper function "dblint"... dblint = dblint(tx,ty,c,kx,ky,xb,xe,yb,ye) Wrote C/API module "dfitpack" to file "build/src.macosx-10.4-x86_64-2.7/scipy/interpolate/src/dfitpackmodule.c" Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. adding 'build/src.macosx-10.4-x86_64-2.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources. building extension "scipy.interpolate._interpolate" sources building extension "scipy.io.matlab.streams" sources building extension "scipy.io.matlab.mio_utils" sources building extension "scipy.io.matlab.mio5_utils" sources building extension "scipy.lib.blas.fblas" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/lib creating build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblas.pyf ('Including file', 'scipy/lib/blas/fblas_l1.pyf.src') ('Including file', 'scipy/lib/blas/fblas_l2.pyf.src') ('Including file', 'scipy/lib/blas/fblas_l3.pyf.src') Mismatch in number of replacements (base ) for <__l1=->. Ignoring. 
conv_template:> build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblaswrap_veclib_c.c creating build/src.macosx-10.4-x86_64-2.7/build creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7 creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas f2py options: ['skip:', ':'] f2py: build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblas.pyf Reading fortran codes... Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblas.pyf' (format:free) Post-processing... Block: fblas Block: srotg Block: drotg Block: crotg Block: zrotg Block: srotmg Block: drotmg Block: srot Block: drot Block: csrot Block: zdrot Block: srotm Block: drotm Block: sswap Block: dswap Block: cswap Block: zswap Block: sscal Block: dscal Block: cscal Block: zscal Block: csscal Block: zdscal Block: scopy Block: dcopy Block: ccopy Block: zcopy Block: saxpy Block: daxpy Block: caxpy Block: zaxpy Block: sdot Block: ddot Block: cdotu Block: zdotu Block: cdotc Block: zdotc Block: snrm2 Block: dnrm2 Block: scnrm2 Block: dznrm2 Block: sasum Block: dasum Block: scasum Block: dzasum Block: isamax Block: idamax Block: icamax Block: izamax Block: sgemv Block: dgemv Block: cgemv Block: zgemv Block: ssymv Block: dsymv Block: chemv Block: zhemv Block: strmv Block: dtrmv Block: ctrmv Block: ztrmv Block: sger Block: dger Block: cgeru Block: zgeru Block: cgerc Block: zgerc Block: sgemm Block: dgemm Block: cgemm Block: zgemm Post-processing (stage 2)... Building modules... Building module "fblas"... Constructing wrapper function "srotg"... c,s = srotg(a,b) Constructing wrapper function "drotg"... c,s = drotg(a,b) Constructing wrapper function "crotg"... c,s = crotg(a,b) Constructing wrapper function "zrotg"... c,s = zrotg(a,b) Constructing wrapper function "srotmg"... 
param = srotmg(d1,d2,x1,y1) Constructing wrapper function "drotmg"... param = drotmg(d1,d2,x1,y1) Constructing wrapper function "srot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = srot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "drot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = drot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "csrot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = csrot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "zdrot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = zdrot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "srotm"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = srotm(x,y,param,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "drotm"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = drotm(x,y,param,[n,offx,incx,offy,incy,overwrite_x,overwrite_y]) Constructing wrapper function "sswap"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = sswap(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "dswap"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = dswap(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "cswap"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = cswap(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "zswap"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,y = zswap(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "sscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = sscal(a,x,[n,offx,incx]) Constructing wrapper function "dscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = dscal(a,x,[n,offx,incx]) Constructing wrapper function "cscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = cscal(a,x,[n,offx,incx]) Constructing wrapper function "zscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = zscal(a,x,[n,offx,incx]) Constructing wrapper function "csscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = csscal(a,x,[n,offx,incx,overwrite_x]) Constructing wrapper function "zdscal"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = zdscal(a,x,[n,offx,incx,overwrite_x]) Constructing wrapper function "scopy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y = scopy(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "dcopy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y = dcopy(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "ccopy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y = ccopy(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "zcopy"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' y = zcopy(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "saxpy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' z = saxpy(x,y,[n,a,offx,incx,offy,incy]) Constructing wrapper function "daxpy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' z = daxpy(x,y,[n,a,offx,incx,offy,incy]) Constructing wrapper function "caxpy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' z = caxpy(x,y,[n,a,offx,incx,offy,incy]) Constructing wrapper function "zaxpy"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' z = zaxpy(x,y,[n,a,offx,incx,offy,incy]) Creating wrapper for Fortran function "sdot"("sdot")... Constructing wrapper function "sdot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = sdot(x,y,[n,offx,incx,offy,incy]) Creating wrapper for Fortran function "ddot"("ddot")... Constructing wrapper function "ddot"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = ddot(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "cdotu"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = cdotu(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "zdotu"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = zdotu(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "cdotc"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = cdotc(x,y,[n,offx,incx,offy,incy]) Constructing wrapper function "zdotc"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' xy = zdotc(x,y,[n,offx,incx,offy,incy]) Creating wrapper for Fortran function "snrm2"("snrm2")... Constructing wrapper function "snrm2"... getarrdims:warning: assumed shape array, using 0 instead of '*' n2 = snrm2(x,[n,offx,incx]) Creating wrapper for Fortran function "dnrm2"("dnrm2")... Constructing wrapper function "dnrm2"... getarrdims:warning: assumed shape array, using 0 instead of '*' n2 = dnrm2(x,[n,offx,incx]) Creating wrapper for Fortran function "scnrm2"("scnrm2")... Constructing wrapper function "scnrm2"... getarrdims:warning: assumed shape array, using 0 instead of '*' n2 = scnrm2(x,[n,offx,incx]) Creating wrapper for Fortran function "dznrm2"("dznrm2")... Constructing wrapper function "dznrm2"... getarrdims:warning: assumed shape array, using 0 instead of '*' n2 = dznrm2(x,[n,offx,incx]) Creating wrapper for Fortran function "sasum"("sasum")... Constructing wrapper function "sasum"... getarrdims:warning: assumed shape array, using 0 instead of '*' s = sasum(x,[n,offx,incx]) Creating wrapper for Fortran function "dasum"("dasum")... Constructing wrapper function "dasum"... getarrdims:warning: assumed shape array, using 0 instead of '*' s = dasum(x,[n,offx,incx]) Creating wrapper for Fortran function "scasum"("scasum")... Constructing wrapper function "scasum"... getarrdims:warning: assumed shape array, using 0 instead of '*' s = scasum(x,[n,offx,incx]) Creating wrapper for Fortran function "dzasum"("dzasum")... Constructing wrapper function "dzasum"... getarrdims:warning: assumed shape array, using 0 instead of '*' s = dzasum(x,[n,offx,incx]) Constructing wrapper function "isamax"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' k = isamax(x,[n,offx,incx]) Constructing wrapper function "idamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = idamax(x,[n,offx,incx]) Constructing wrapper function "icamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = icamax(x,[n,offx,incx]) Constructing wrapper function "izamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = izamax(x,[n,offx,incx]) Constructing wrapper function "sgemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = sgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) Constructing wrapper function "dgemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = dgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) Constructing wrapper function "cgemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = cgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) Constructing wrapper function "zgemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = zgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y]) Constructing wrapper function "ssymv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = ssymv(alpha,a,x,[beta,y,offx,incx,offy,incy,lower,overwrite_y]) Constructing wrapper function "dsymv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = dsymv(alpha,a,x,[beta,y,offx,incx,offy,incy,lower,overwrite_y]) Constructing wrapper function "chemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = chemv(alpha,a,x,[beta,y,offx,incx,offy,incy,lower,overwrite_y]) Constructing wrapper function "zhemv"... getarrdims:warning: assumed shape array, using 0 instead of '*' y = zhemv(alpha,a,x,[beta,y,offx,incx,offy,incy,lower,overwrite_y]) Constructing wrapper function "strmv"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' x = strmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x]) Constructing wrapper function "dtrmv"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = dtrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x]) Constructing wrapper function "ctrmv"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = ctrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x]) Constructing wrapper function "ztrmv"... getarrdims:warning: assumed shape array, using 0 instead of '*' x = ztrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x]) Constructing wrapper function "sger"... a = sger(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "dger"... a = dger(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "cgeru"... a = cgeru(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "zgeru"... a = zgeru(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "cgerc"... a = cgerc(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "zgerc"... a = zgerc(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a]) Constructing wrapper function "sgemm"... c = sgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c]) Constructing wrapper function "dgemm"... c = dgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c]) Constructing wrapper function "cgemm"... c = cgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c]) Constructing wrapper function "zgemm"... 
  c = zgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c])
Wrote C/API module "fblas" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblasmodule.c"
Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblas-f2pywrappers.f"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
adding 'build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/fblas-f2pywrappers.f' to sources.
building extension "scipy.lib.blas.cblas" sources
adding 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/cblas.pyf' to sources.
f2py options: ['skip:', ':']
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/cblas.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/cblas.pyf' (format:free)
Post-processing...
Block: cblas
Block: empty_module
Post-processing (stage 2)...
Building modules...
Building module "cblas"...
Constructing wrapper function "empty_module"...
  empty_module()
Wrote C/API module "cblas" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/blas/cblasmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.lib.lapack.flapack" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf
('Including file', 'scipy/lib/lapack/flapack_user.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_le.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_lls.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_esv.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_gesv.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_lec.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_llsc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_sevc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_evc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_svdc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_gsevc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_gevc.pyf.src')
('Including file', 'scipy/lib/lapack/flapack_aux.pyf.src')
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack
f2py options: ['skip:', ':']
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf' (format:free)
Line #1590 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" 3*n-1"
crackline:3: No pattern for line
Line #1612 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" 3*n-1"
crackline:3: No pattern for line
Line #1634 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" 2*n-1"
crackline:3: No pattern for line
Line #1656 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" 2*n-1"
crackline:3: No pattern for line
Line #1679 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" (compute_v?1+6*n+2*n*n:2*n+1)"
crackline:3: No pattern for line
Line #1704 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" (compute_v?1+6*n+2*n*n:2*n+1)"
crackline:3: No pattern for line
Line #1729 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" (compute_v?2*n+n*n:n+1)"
crackline:3: No pattern for line
Line #1754 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" (compute_v?2*n+n*n:n+1)"
crackline:3: No pattern for line
Line #2647 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" n"
crackline:3: No pattern for line
Line #2668 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" n"
crackline:3: No pattern for line
Line #2689 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" n"
crackline:3: No pattern for line
Line #2710 in build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:" n"
crackline:3: No pattern for line
Post-processing...
Block: flapack
Block: gees__user__routines
Block: gees_user_interface
Block: sselect
Block: dselect
Block: cselect
Block: zselect
Block: sgesv
Block: dgesv
Block: cgesv
Block: zgesv
Block: sgbsv
Block: dgbsv
Block: cgbsv
Block: zgbsv
Block: sposv
Block: dposv
Block: cposv
Block: zposv
Block: sgelss
Block: dgelss
Block: cgelss
Block: zgelss
Block: ssyev
Block: dsyev
Block: cheev
Block: zheev
Block: ssyevd
Block: dsyevd
Block: cheevd
Block: zheevd
Block: ssyevr
Block: dsyevr
Block: cheevr
Block: zheevr
Block: sgees
In: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:flapack:unknown_interface:sgees
get_useparameters: no module gees__user__routines info used by sgees
Block: dgees
In: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:flapack:unknown_interface:dgees
get_useparameters: no module gees__user__routines info used by dgees
Block: cgees
In: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:flapack:unknown_interface:cgees
get_useparameters: no module gees__user__routines info used by cgees
Block: zgees
In: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapack.pyf:flapack:unknown_interface:zgees
get_useparameters: no module gees__user__routines info used by zgees
Block: sgeev
Block: dgeev
Block: cgeev
Block: zgeev
Block: sgesdd
Block: dgesdd
Block: cgesdd
Block: zgesdd
Block: ssygv
Block: dsygv
Block: chegv
Block: zhegv
Block: ssygvd
Block: dsygvd
Block: chegvd
Block: zhegvd
Block: sggev
Block: dggev
Block: cggev
Block: zggev
Block: sgetrf
Block: dgetrf
Block: cgetrf
Block: zgetrf
Block: spotrf
Block: dpotrf
Block: cpotrf
Block: zpotrf
Block: sgetrs
Block: dgetrs
Block: cgetrs
Block: zgetrs
Block: spotrs
Block: dpotrs
Block: cpotrs
Block: zpotrs
Block: sgetri
Block: dgetri
Block: cgetri
Block: zgetri
Block: spotri
Block: dpotri
Block: cpotri
Block: zpotri
Block: strtri
Block: dtrtri
Block: ctrtri
Block: ztrtri
Block: sgeqrf
Block: dgeqrf
Block: cgeqrf
Block: zgeqrf
Block: sorgqr
Block: dorgqr
Block: cungqr
Block: zungqr
Block: sgehrd
Block: dgehrd
Block: cgehrd
Block: zgehrd
Block: sgebal
Block: dgebal
Block: cgebal
Block: zgebal
Block: slauum
Block: dlauum
Block: clauum
Block: zlauum
Block: slaswp
Block: dlaswp
Block: claswp
Block: zlaswp
Post-processing (stage 2)...
Building modules...
Constructing call-back function "cb_sselect_in_gees__user__routines"
  def sselect(arg1,arg2): return sselect
Constructing call-back function "cb_dselect_in_gees__user__routines"
  def dselect(arg1,arg2): return dselect
Constructing call-back function "cb_cselect_in_gees__user__routines"
  def cselect(arg): return cselect
Constructing call-back function "cb_zselect_in_gees__user__routines"
  def zselect(arg): return zselect
Building module "flapack"...
Constructing wrapper function "sgesv"...
  lu,piv,x,info = sgesv(a,b,[overwrite_a,overwrite_b])
Constructing wrapper function "dgesv"...
  lu,piv,x,info = dgesv(a,b,[overwrite_a,overwrite_b])
Constructing wrapper function "cgesv"...
  lu,piv,x,info = cgesv(a,b,[overwrite_a,overwrite_b])
Constructing wrapper function "zgesv"...
  lu,piv,x,info = zgesv(a,b,[overwrite_a,overwrite_b])
Constructing wrapper function "sgbsv"...
  lub,piv,x,info = sgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b])
Constructing wrapper function "dgbsv"...
  lub,piv,x,info = dgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b])
Constructing wrapper function "cgbsv"...
  lub,piv,x,info = cgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b])
Constructing wrapper function "zgbsv"...
  lub,piv,x,info = zgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b])
Constructing wrapper function "sposv"...
  c,x,info = sposv(a,b,[lower,overwrite_a,overwrite_b])
Constructing wrapper function "dposv"...
  c,x,info = dposv(a,b,[lower,overwrite_a,overwrite_b])
Constructing wrapper function "cposv"...
  c,x,info = cposv(a,b,[lower,overwrite_a,overwrite_b])
Constructing wrapper function "zposv"...
  c,x,info = zposv(a,b,[lower,overwrite_a,overwrite_b])
Constructing wrapper function "sgelss"...
  v,x,s,rank,info = sgelss(a,b,[cond,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "dgelss"...
  v,x,s,rank,info = dgelss(a,b,[cond,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "cgelss"...
  v,x,s,rank,info = cgelss(a,b,[cond,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "zgelss"...
  v,x,s,rank,info = zgelss(a,b,[cond,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "ssyev"...
  w,v,info = ssyev(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "dsyev"...
  w,v,info = dsyev(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "cheev"...
  w,v,info = cheev(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "zheev"...
  w,v,info = zheev(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "ssyevd"...
  w,v,info = ssyevd(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "dsyevd"...
  w,v,info = dsyevd(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "cheevd"...
  w,v,info = cheevd(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "zheevd"...
  w,v,info = zheevd(a,[compute_v,lower,lwork,overwrite_a])
Constructing wrapper function "ssyevr"...
  w,v,info = ssyevr(a,[compute_v,lower,vrange,irange,atol,lwork,overwrite_a])
Constructing wrapper function "dsyevr"...
  w,v,info = dsyevr(a,[compute_v,lower,vrange,irange,atol,lwork,overwrite_a])
Constructing wrapper function "cheevr"...
  w,v,info = cheevr(a,[compute_v,lower,vrange,irange,atol,lwork,overwrite_a])
Constructing wrapper function "zheevr"...
  w,v,info = zheevr(a,[compute_v,lower,vrange,irange,atol,lwork,overwrite_a])
Constructing wrapper function "sgees"...
  t,sdim,wr,wi,vs,info = sgees(sselect,a,[compute_v,sort_t,lwork,sselect_extra_args,overwrite_a])
Constructing wrapper function "dgees"...
  t,sdim,wr,wi,vs,info = dgees(dselect,a,[compute_v,sort_t,lwork,dselect_extra_args,overwrite_a])
Constructing wrapper function "cgees"...
  t,sdim,w,vs,info = cgees(cselect,a,[compute_v,sort_t,lwork,cselect_extra_args,overwrite_a])
Constructing wrapper function "zgees"...
  t,sdim,w,vs,info = zgees(zselect,a,[compute_v,sort_t,lwork,zselect_extra_args,overwrite_a])
Constructing wrapper function "sgeev"...
  wr,wi,vl,vr,info = sgeev(a,[compute_vl,compute_vr,lwork,overwrite_a])
Constructing wrapper function "dgeev"...
  wr,wi,vl,vr,info = dgeev(a,[compute_vl,compute_vr,lwork,overwrite_a])
Constructing wrapper function "cgeev"...
  w,vl,vr,info = cgeev(a,[compute_vl,compute_vr,lwork,overwrite_a])
Constructing wrapper function "zgeev"...
  w,vl,vr,info = zgeev(a,[compute_vl,compute_vr,lwork,overwrite_a])
Constructing wrapper function "sgesdd"...
  u,s,vt,info = sgesdd(a,[compute_uv,lwork,overwrite_a])
Constructing wrapper function "dgesdd"...
  u,s,vt,info = dgesdd(a,[compute_uv,lwork,overwrite_a])
Constructing wrapper function "cgesdd"...
  u,s,vt,info = cgesdd(a,[compute_uv,lwork,overwrite_a])
Constructing wrapper function "zgesdd"...
  u,s,vt,info = zgesdd(a,[compute_uv,lwork,overwrite_a])
Constructing wrapper function "ssygv"...
  w,v,info = ssygv(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "dsygv"...
  w,v,info = dsygv(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "chegv"...
  w,v,info = chegv(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "zhegv"...
  w,v,info = zhegv(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "ssygvd"...
  w,v,info = ssygvd(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "dsygvd"...
  w,v,info = dsygvd(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "chegvd"...
  w,v,info = chegvd(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "zhegvd"...
  w,v,info = zhegvd(a,b,[itype,compute_v,lower,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "sggev"...
  alphar,alphai,beta,vl,vr,info = sggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "dggev"...
  alphar,alphai,beta,vl,vr,info = dggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "cggev"...
  alpha,beta,vl,vr,info = cggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "zggev"...
  alpha,beta,vl,vr,info = zggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b])
Constructing wrapper function "sgetrf"...
  lu,piv,info = sgetrf(a,[overwrite_a])
Constructing wrapper function "dgetrf"...
  lu,piv,info = dgetrf(a,[overwrite_a])
Constructing wrapper function "cgetrf"...
  lu,piv,info = cgetrf(a,[overwrite_a])
Constructing wrapper function "zgetrf"...
  lu,piv,info = zgetrf(a,[overwrite_a])
Constructing wrapper function "spotrf"...
  c,info = spotrf(a,[lower,clean,overwrite_a])
Constructing wrapper function "dpotrf"...
  c,info = dpotrf(a,[lower,clean,overwrite_a])
Constructing wrapper function "cpotrf"...
  c,info = cpotrf(a,[lower,clean,overwrite_a])
Constructing wrapper function "zpotrf"...
  c,info = zpotrf(a,[lower,clean,overwrite_a])
Constructing wrapper function "sgetrs"...
  x,info = sgetrs(lu,piv,b,[trans,overwrite_b])
Constructing wrapper function "dgetrs"...
  x,info = dgetrs(lu,piv,b,[trans,overwrite_b])
Constructing wrapper function "cgetrs"...
  x,info = cgetrs(lu,piv,b,[trans,overwrite_b])
Constructing wrapper function "zgetrs"...
  x,info = zgetrs(lu,piv,b,[trans,overwrite_b])
Constructing wrapper function "spotrs"...
  x,info = spotrs(c,b,[lower,overwrite_b])
Constructing wrapper function "dpotrs"...
  x,info = dpotrs(c,b,[lower,overwrite_b])
Constructing wrapper function "cpotrs"...
  x,info = cpotrs(c,b,[lower,overwrite_b])
Constructing wrapper function "zpotrs"...
  x,info = zpotrs(c,b,[lower,overwrite_b])
Constructing wrapper function "sgetri"...
  inv_a,info = sgetri(lu,piv,[lwork,overwrite_lu])
Constructing wrapper function "dgetri"...
  inv_a,info = dgetri(lu,piv,[lwork,overwrite_lu])
Constructing wrapper function "cgetri"...
  inv_a,info = cgetri(lu,piv,[lwork,overwrite_lu])
Constructing wrapper function "zgetri"...
  inv_a,info = zgetri(lu,piv,[lwork,overwrite_lu])
Constructing wrapper function "spotri"...
  inv_a,info = spotri(c,[lower,overwrite_c])
Constructing wrapper function "dpotri"...
  inv_a,info = dpotri(c,[lower,overwrite_c])
Constructing wrapper function "cpotri"...
  inv_a,info = cpotri(c,[lower,overwrite_c])
Constructing wrapper function "zpotri"...
  inv_a,info = zpotri(c,[lower,overwrite_c])
Constructing wrapper function "strtri"...
  inv_c,info = strtri(c,[lower,unitdiag,overwrite_c])
Constructing wrapper function "dtrtri"...
  inv_c,info = dtrtri(c,[lower,unitdiag,overwrite_c])
Constructing wrapper function "ctrtri"...
  inv_c,info = ctrtri(c,[lower,unitdiag,overwrite_c])
Constructing wrapper function "ztrtri"...
  inv_c,info = ztrtri(c,[lower,unitdiag,overwrite_c])
Constructing wrapper function "sgeqrf"...
  qr,tau,info = sgeqrf(a,[lwork,overwrite_a])
Constructing wrapper function "dgeqrf"...
  qr,tau,info = dgeqrf(a,[lwork,overwrite_a])
Constructing wrapper function "cgeqrf"...
  qr,tau,info = cgeqrf(a,[lwork,overwrite_a])
Constructing wrapper function "zgeqrf"...
  qr,tau,info = zgeqrf(a,[lwork,overwrite_a])
Constructing wrapper function "sorgqr"...
  q,info = sorgqr(qr,tau,[lwork,overwrite_qr,overwrite_tau])
Constructing wrapper function "dorgqr"...
  q,info = dorgqr(qr,tau,[lwork,overwrite_qr,overwrite_tau])
Constructing wrapper function "cungqr"...
  q,info = cungqr(qr,tau,[lwork,overwrite_qr,overwrite_tau])
Constructing wrapper function "zungqr"...
  q,info = zungqr(qr,tau,[lwork,overwrite_qr,overwrite_tau])
Constructing wrapper function "sgehrd"...
  ht,tau,info = sgehrd(a,[lo,hi,lwork,overwrite_a])
Constructing wrapper function "dgehrd"...
  ht,tau,info = dgehrd(a,[lo,hi,lwork,overwrite_a])
Constructing wrapper function "cgehrd"...
  ht,tau,info = cgehrd(a,[lo,hi,lwork,overwrite_a])
Constructing wrapper function "zgehrd"...
  ht,tau,info = zgehrd(a,[lo,hi,lwork,overwrite_a])
Constructing wrapper function "sgebal"...
  ba,lo,hi,pivscale,info = sgebal(a,[scale,permute,overwrite_a])
Constructing wrapper function "dgebal"...
  ba,lo,hi,pivscale,info = dgebal(a,[scale,permute,overwrite_a])
Constructing wrapper function "cgebal"...
  ba,lo,hi,pivscale,info = cgebal(a,[scale,permute,overwrite_a])
Constructing wrapper function "zgebal"...
  ba,lo,hi,pivscale,info = zgebal(a,[scale,permute,overwrite_a])
Constructing wrapper function "slauum"...
  a,info = slauum(c,[lower,overwrite_c])
Constructing wrapper function "dlauum"...
  a,info = dlauum(c,[lower,overwrite_c])
Constructing wrapper function "clauum"...
  a,info = clauum(c,[lower,overwrite_c])
Constructing wrapper function "zlauum"...
  a,info = zlauum(c,[lower,overwrite_c])
Constructing wrapper function "slaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  a = slaswp(a,piv,[k1,k2,off,inc,overwrite_a])
Constructing wrapper function "dlaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  a = dlaswp(a,piv,[k1,k2,off,inc,overwrite_a])
Constructing wrapper function "claswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  a = claswp(a,piv,[k1,k2,off,inc,overwrite_a])
Constructing wrapper function "zlaswp"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  a = zlaswp(a,piv,[k1,k2,off,inc,overwrite_a])
Wrote C/API module "flapack" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/flapackmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.lib.lapack.clapack" sources
adding 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/clapack.pyf' to sources.
f2py options: ['skip:', ':']
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/clapack.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/clapack.pyf' (format:free)
Post-processing...
Block: clapack
Block: empty_module
Post-processing (stage 2)...
Building modules...
Building module "clapack"...
Constructing wrapper function "empty_module"...
  empty_module()
Wrote C/API module "clapack" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/clapackmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.lib.lapack.calc_lwork" sources
f2py options: []
f2py:> build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/calc_lworkmodule.c
Reading fortran codes...
Reading file 'scipy/lib/lapack/calc_lwork.f' (format:fix,strict)
Post-processing...
Block: calc_lwork
Block: gehrd
Block: gesdd
Block: gelss
Block: getri
Block: geev
Block: heev
Block: syev
Block: gees
Block: geqrf
Block: gqr
Post-processing (stage 2)...
Building modules...
Building module "calc_lwork"...
Constructing wrapper function "gehrd"...
  minwrk,maxwrk = gehrd(prefix,n,[lo,hi])
Constructing wrapper function "gesdd"...
  minwrk,maxwrk = gesdd(prefix,m,n,[compute_uv])
Constructing wrapper function "gelss"...
  minwrk,maxwrk = gelss(prefix,m,n,nrhs)
Constructing wrapper function "getri"...
  minwrk,maxwrk = getri(prefix,n)
Constructing wrapper function "geev"...
  minwrk,maxwrk = geev(prefix,n,[compute_vl,compute_vr])
Constructing wrapper function "heev"...
  minwrk,maxwrk = heev(prefix,n,[lower])
Constructing wrapper function "syev"...
  minwrk,maxwrk = syev(prefix,n,[lower])
Constructing wrapper function "gees"...
  minwrk,maxwrk = gees(prefix,n,[compute_v])
Constructing wrapper function "geqrf"...
  minwrk,maxwrk = geqrf(prefix,m,n)
Constructing wrapper function "gqr"...
  minwrk,maxwrk = gqr(prefix,m,n)
Wrote C/API module "calc_lwork" to file "build/src.macosx-10.4-x86_64-2.7/scipy/lib/lapack/calc_lworkmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.lib.lapack.atlas_version" sources
building extension "scipy.linalg.fblas" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/linalg
generating fblas interface 23
adding 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblas.pyf' to sources.
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg
f2py options: []
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblas.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblas.pyf' (format:free)
Post-processing...
Block: fblas
Block: srotg
Block: drotg
Block: crotg
Block: zrotg
Block: srotmg
Block: drotmg
Block: srot
Block: drot
Block: csrot
Block: zdrot
Block: srotm
Block: drotm
Block: sswap
Block: dswap
Block: cswap
Block: zswap
Block: sscal
Block: dscal
Block: cscal
Block: zscal
Block: csscal
Block: zdscal
Block: scopy
Block: dcopy
Block: ccopy
Block: zcopy
Block: saxpy
Block: daxpy
Block: caxpy
Block: zaxpy
Block: cdotu
Block: zdotu
Block: cdotc
Block: zdotc
Block: sgemv
Block: dgemv
Block: cgemv
Block: zgemv
Block: chemv
Block: zhemv
Block: ssymv
Block: dsymv
Block: strmv
Block: dtrmv
Block: ctrmv
Block: ztrmv
Block: sger
Block: dger
Block: cgeru
Block: zgeru
Block: cgerc
Block: zgerc
Block: sgemm
Block: dgemm
Block: cgemm
Block: zgemm
Block: sdot
Block: ddot
Block: snrm2
Block: dnrm2
Block: scnrm2
Block: dznrm2
Block: sasum
Block: dasum
Block: scasum
Block: dzasum
Block: isamax
Block: idamax
Block: icamax
Block: izamax
Post-processing (stage 2)...
Building modules...
Building module "fblas"...
Constructing wrapper function "srotg"...
  c,s = srotg(a,b)
Constructing wrapper function "drotg"...
  c,s = drotg(a,b)
Constructing wrapper function "crotg"...
  c,s = crotg(a,b)
Constructing wrapper function "zrotg"...
  c,s = zrotg(a,b)
Constructing wrapper function "srotmg"...
  param = srotmg(d1,d2,x1,y1)
Constructing wrapper function "drotmg"...
  param = drotmg(d1,d2,x1,y1)
Constructing wrapper function "srot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = srot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "drot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = drot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "csrot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = csrot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "zdrot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = zdrot(x,y,c,s,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "srotm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = srotm(x,y,param,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "drotm"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = drotm(x,y,param,[n,offx,incx,offy,incy,overwrite_x,overwrite_y])
Constructing wrapper function "sswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = sswap(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "dswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = dswap(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "cswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = cswap(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "zswap"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x,y = zswap(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "sscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = sscal(a,x,[n,offx,incx])
Constructing wrapper function "dscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = dscal(a,x,[n,offx,incx])
Constructing wrapper function "cscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = cscal(a,x,[n,offx,incx])
Constructing wrapper function "zscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = zscal(a,x,[n,offx,incx])
Constructing wrapper function "csscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = csscal(a,x,[n,offx,incx,overwrite_x])
Constructing wrapper function "zdscal"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = zdscal(a,x,[n,offx,incx,overwrite_x])
Constructing wrapper function "scopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = scopy(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "dcopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = dcopy(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "ccopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = ccopy(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "zcopy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = zcopy(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "saxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = saxpy(x,y,[n,a,offx,incx,offy,incy])
Constructing wrapper function "daxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = daxpy(x,y,[n,a,offx,incx,offy,incy])
Constructing wrapper function "caxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = caxpy(x,y,[n,a,offx,incx,offy,incy])
Constructing wrapper function "zaxpy"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = zaxpy(x,y,[n,a,offx,incx,offy,incy])
Constructing wrapper function "cdotu"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = cdotu(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "zdotu"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = zdotu(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "cdotc"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = cdotc(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "zdotc"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = zdotc(x,y,[n,offx,incx,offy,incy])
Constructing wrapper function "sgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = sgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y])
Constructing wrapper function "dgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = dgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y])
Constructing wrapper function "cgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = cgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y])
Constructing wrapper function "zgemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = zgemv(alpha,a,x,[beta,y,offx,incx,offy,incy,trans,overwrite_y])
Constructing wrapper function "chemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = chemv(alpha,a,x,beta,y,[offx,incx,offy,incy,lower,overwrite_y])
Constructing wrapper function "zhemv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = zhemv(alpha,a,x,beta,y,[offx,incx,offy,incy,lower,overwrite_y])
Constructing wrapper function "ssymv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = ssymv(alpha,a,x,beta,y,[offx,incx,offy,incy,lower,overwrite_y])
Constructing wrapper function "dsymv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  y = dsymv(alpha,a,x,beta,y,[offx,incx,offy,incy,lower,overwrite_y])
Constructing wrapper function "strmv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = strmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x])
Constructing wrapper function "dtrmv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = dtrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x])
Constructing wrapper function "ctrmv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = ctrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x])
Constructing wrapper function "ztrmv"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  x = ztrmv(a,x,[offx,incx,lower,trans,unitdiag,overwrite_x])
Constructing wrapper function "sger"...
  a = sger(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "dger"...
  a = dger(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "cgeru"...
  a = cgeru(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "zgeru"...
  a = zgeru(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "cgerc"...
  a = cgerc(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "zgerc"...
  a = zgerc(alpha,x,y,[incx,incy,a,overwrite_x,overwrite_y,overwrite_a])
Constructing wrapper function "sgemm"...
  c = sgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c])
Constructing wrapper function "dgemm"...
  c = dgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c])
Constructing wrapper function "cgemm"...
  c = cgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c])
Constructing wrapper function "zgemm"...
  c = zgemm(alpha,a,b,[beta,c,trans_a,trans_b,overwrite_c])
Creating wrapper for Fortran function "sdot"("sdot")...
Constructing wrapper function "sdot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = sdot(x,y,[n,offx,incx,offy,incy])
Creating wrapper for Fortran function "ddot"("ddot")...
Constructing wrapper function "ddot"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
  xy = ddot(x,y,[n,offx,incx,offy,incy])
Creating wrapper for Fortran function "snrm2"("snrm2")...
Constructing wrapper function "snrm2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  n2 = snrm2(x,[n,offx,incx])
Creating wrapper for Fortran function "dnrm2"("dnrm2")...
Constructing wrapper function "dnrm2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  n2 = dnrm2(x,[n,offx,incx])
Creating wrapper for Fortran function "scnrm2"("scnrm2")...
Constructing wrapper function "scnrm2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  n2 = scnrm2(x,[n,offx,incx])
Creating wrapper for Fortran function "dznrm2"("dznrm2")...
Constructing wrapper function "dznrm2"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  n2 = dznrm2(x,[n,offx,incx])
Creating wrapper for Fortran function "sasum"("sasum")...
Constructing wrapper function "sasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  s = sasum(x,[n,offx,incx])
Creating wrapper for Fortran function "dasum"("dasum")...
Constructing wrapper function "dasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  s = dasum(x,[n,offx,incx])
Creating wrapper for Fortran function "scasum"("scasum")...
Constructing wrapper function "scasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
  s = scasum(x,[n,offx,incx])
Creating wrapper for Fortran function "dzasum"("dzasum")...
Constructing wrapper function "dzasum"...
getarrdims:warning: assumed shape array, using 0 instead of '*' s = dzasum(x,[n,offx,incx]) Constructing wrapper function "isamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = isamax(x,[n,offx,incx]) Constructing wrapper function "idamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = idamax(x,[n,offx,incx]) Constructing wrapper function "icamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = icamax(x,[n,offx,incx]) Constructing wrapper function "izamax"... getarrdims:warning: assumed shape array, using 0 instead of '*' k = izamax(x,[n,offx,incx]) Wrote C/API module "fblas" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblasmodule.c" Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblas-f2pywrappers.f" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. adding 'build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/cblas.pyf' to sources. f2py options: [] f2py: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/cblas.pyf Reading fortran codes... Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/cblas.pyf' (format:free) Post-processing... Block: cblas Block: empty_module Post-processing (stage 2)... Building modules... Building module "cblas"... Constructing wrapper function "empty_module"... empty_module() Wrote C/API module "cblas" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/cblasmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. 
building extension "scipy.linalg.flapack" sources generating flapack interface 64 adding 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf' to sources. f2py options: [] f2py: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf Reading fortran codes... Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf' (format:free) Line #3296 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:" char*,char*,char*,int*,int*,complex_float*,int*,complex_float*,int*,float*,float*,int*,int*,float*,int*,float*,complex_float*,int*,complex_float*,float*,int*,int*,int*" crackline:3: No pattern for line Line #3373 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:" char*,char*,char*,int*,int*,complex_double*,int*,complex_double*,int*,double*,double*,int*,int*,double*,int*,double*,complex_double*,int*,complex_double*,double*,int*,int*,int*" crackline:3: No pattern for line Line #3533 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:"lprotoargument char*,int*,int *,int*,int*,float*,int*,int*,float*,int*,int*" crackline:3: No pattern for line Line #3563 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:"lprotoargument char*,int*,int *,int*,int*,double*,int*,int*,double*,int*,int*" crackline:3: No pattern for line Line #3593 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:"lprotoargument char*,int*,int *,int*,int*,complex_float*,int*,int*,complex_float*,int*,int*" crackline:3: No pattern for line Line #3623 in build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:"lprotoargument char*,int*,int *,int*,int*,complex_double*,int*,int*,complex_double*,int*,int*" crackline:3: No pattern for line Post-processing... 
Block: cgees__user__routines Block: cgees_user_interface Block: cselect Block: dgees__user__routines Block: dgees_user_interface Block: dselect Block: sgees__user__routines Block: sgees_user_interface Block: sselect Block: zgees__user__routines Block: zgees_user_interface Block: zselect Block: flapack Block: spbtrf Block: dpbtrf Block: cpbtrf Block: zpbtrf Block: spbtrs Block: dpbtrs Block: cpbtrs Block: zpbtrs Block: strtrs Block: dtrtrs Block: ctrtrs Block: ztrtrs Block: spbsv Block: dpbsv Block: cpbsv Block: zpbsv Block: sgebal Block: dgebal Block: cgebal Block: zgebal Block: sgehrd Block: dgehrd Block: cgehrd Block: zgehrd Block: sgbsv Block: dgbsv Block: cgbsv Block: zgbsv Block: sgesv Block: dgesv Block: cgesv Block: zgesv Block: sgetrf Block: dgetrf Block: cgetrf Block: zgetrf Block: sgetrs Block: dgetrs Block: cgetrs Block: zgetrs Block: sgetri Block: dgetri Block: cgetri Block: zgetri Block: sgesdd Block: dgesdd Block: cgesdd Block: zgesdd Block: sgelss Block: dgelss Block: cgelss Block: zgelss Block: sgeqrf Block: dgeqrf Block: cgeqrf Block: zgeqrf Block: sgerqf Block: dgerqf Block: cgerqf Block: zgerqf Block: sorgqr Block: dorgqr Block: cungqr Block: zungqr Block: sgeev Block: dgeev Block: cgeev Block: zgeev Block: sgegv Block: dgegv Block: cgegv Block: zgegv Block: ssyev Block: dsyev Block: cheev Block: zheev Block: sposv Block: dposv Block: cposv Block: zposv Block: spotrf Block: dpotrf Block: cpotrf Block: zpotrf Block: spotrs Block: dpotrs Block: cpotrs Block: zpotrs Block: spotri Block: dpotri Block: cpotri Block: zpotri Block: slauum Block: dlauum Block: clauum Block: zlauum Block: strtri Block: dtrtri Block: ctrtri Block: ztrtri Block: slaswp Block: dlaswp Block: claswp Block: zlaswp Block: cgees In: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:flapack:unknown_interface:cgees get_useparameters: no module cgees__user__routines info used by cgees Block: zgees In: 
build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:flapack:unknown_interface:zgees get_useparameters: no module zgees__user__routines info used by zgees Block: dgees In: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:flapack:unknown_interface:dgees get_useparameters: no module dgees__user__routines info used by dgees Block: sgees In: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack.pyf:flapack:unknown_interface:sgees get_useparameters: no module sgees__user__routines info used by sgees Block: sggev Block: dggev Block: cggev Block: zggev Block: ssbev Block: dsbev Block: ssbevd Block: dsbevd Block: ssbevx Block: dsbevx Block: chbevd Block: zhbevd Block: chbevx Block: zhbevx Block: sgbtrf Block: dgbtrf Block: cgbtrf Block: zgbtrf Block: sgbtrs Block: dgbtrs Block: cgbtrs Block: zgbtrs Block: ssyevr Block: dsyevr Block: cheevr Block: zheevr Block: ssygv Block: dsygv Block: chegv Block: zhegv Block: ssygvd Block: dsygvd Block: chegvd Block: zhegvd Block: ssygvx Block: dsygvx Block: chegvx Block: zhegvx Block: slamch Block: dlamch Post-processing (stage 2)... Building modules... Constructing call-back function "cb_cselect_in_cgees__user__routines" def cselect(e_w__i__e): return cselect Constructing call-back function "cb_dselect_in_dgees__user__routines" def dselect(e_wr__i__e,e_wi__i__e): return dselect Constructing call-back function "cb_sselect_in_sgees__user__routines" def sselect(e_wr__i__e,e_wi__i__e): return sselect Constructing call-back function "cb_zselect_in_zgees__user__routines" def zselect(e_w__i__e): return zselect Building module "flapack"... Constructing wrapper function "spbtrf"... c,info = spbtrf(ab,[lower,ldab,overwrite_ab]) Constructing wrapper function "dpbtrf"... c,info = dpbtrf(ab,[lower,ldab,overwrite_ab]) Constructing wrapper function "cpbtrf"... c,info = cpbtrf(ab,[lower,ldab,overwrite_ab]) Constructing wrapper function "zpbtrf"... c,info = zpbtrf(ab,[lower,ldab,overwrite_ab]) Constructing wrapper function "spbtrs"... 
x,info = spbtrs(ab,b,[lower,ldab,overwrite_b]) Constructing wrapper function "dpbtrs"... x,info = dpbtrs(ab,b,[lower,ldab,overwrite_b]) Constructing wrapper function "cpbtrs"... x,info = cpbtrs(ab,b,[lower,ldab,overwrite_b]) Constructing wrapper function "zpbtrs"... x,info = zpbtrs(ab,b,[lower,ldab,overwrite_b]) Constructing wrapper function "strtrs"... x,info = strtrs(a,b,[lower,trans,unitdiag,lda,overwrite_b]) Constructing wrapper function "dtrtrs"... x,info = dtrtrs(a,b,[lower,trans,unitdiag,lda,overwrite_b]) Constructing wrapper function "ctrtrs"... x,info = ctrtrs(a,b,[lower,trans,unitdiag,lda,overwrite_b]) Constructing wrapper function "ztrtrs"... x,info = ztrtrs(a,b,[lower,trans,unitdiag,lda,overwrite_b]) Constructing wrapper function "spbsv"... c,x,info = spbsv(ab,b,[lower,ldab,overwrite_ab,overwrite_b]) Constructing wrapper function "dpbsv"... c,x,info = dpbsv(ab,b,[lower,ldab,overwrite_ab,overwrite_b]) Constructing wrapper function "cpbsv"... c,x,info = cpbsv(ab,b,[lower,ldab,overwrite_ab,overwrite_b]) Constructing wrapper function "zpbsv"... c,x,info = zpbsv(ab,b,[lower,ldab,overwrite_ab,overwrite_b]) Constructing wrapper function "sgebal"... ba,lo,hi,pivscale,info = sgebal(a,[scale,permute,overwrite_a]) Constructing wrapper function "dgebal"... ba,lo,hi,pivscale,info = dgebal(a,[scale,permute,overwrite_a]) Constructing wrapper function "cgebal"... ba,lo,hi,pivscale,info = cgebal(a,[scale,permute,overwrite_a]) Constructing wrapper function "zgebal"... ba,lo,hi,pivscale,info = zgebal(a,[scale,permute,overwrite_a]) Constructing wrapper function "sgehrd"... ht,tau,info = sgehrd(a,[lo,hi,lwork,overwrite_a]) Constructing wrapper function "dgehrd"... ht,tau,info = dgehrd(a,[lo,hi,lwork,overwrite_a]) Constructing wrapper function "cgehrd"... ht,tau,info = cgehrd(a,[lo,hi,lwork,overwrite_a]) Constructing wrapper function "zgehrd"... ht,tau,info = zgehrd(a,[lo,hi,lwork,overwrite_a]) Constructing wrapper function "sgbsv"... 
lub,piv,x,info = sgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b]) Constructing wrapper function "dgbsv"... lub,piv,x,info = dgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b]) Constructing wrapper function "cgbsv"... lub,piv,x,info = cgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b]) Constructing wrapper function "zgbsv"... lub,piv,x,info = zgbsv(kl,ku,ab,b,[overwrite_ab,overwrite_b]) Constructing wrapper function "sgesv"... lu,piv,x,info = sgesv(a,b,[overwrite_a,overwrite_b]) Constructing wrapper function "dgesv"... lu,piv,x,info = dgesv(a,b,[overwrite_a,overwrite_b]) Constructing wrapper function "cgesv"... lu,piv,x,info = cgesv(a,b,[overwrite_a,overwrite_b]) Constructing wrapper function "zgesv"... lu,piv,x,info = zgesv(a,b,[overwrite_a,overwrite_b]) Constructing wrapper function "sgetrf"... lu,piv,info = sgetrf(a,[overwrite_a]) Constructing wrapper function "dgetrf"... lu,piv,info = dgetrf(a,[overwrite_a]) Constructing wrapper function "cgetrf"... lu,piv,info = cgetrf(a,[overwrite_a]) Constructing wrapper function "zgetrf"... lu,piv,info = zgetrf(a,[overwrite_a]) Constructing wrapper function "sgetrs"... x,info = sgetrs(lu,piv,b,[trans,overwrite_b]) Constructing wrapper function "dgetrs"... x,info = dgetrs(lu,piv,b,[trans,overwrite_b]) Constructing wrapper function "cgetrs"... x,info = cgetrs(lu,piv,b,[trans,overwrite_b]) Constructing wrapper function "zgetrs"... x,info = zgetrs(lu,piv,b,[trans,overwrite_b]) Constructing wrapper function "sgetri"... inv_a,info = sgetri(lu,piv,[lwork,overwrite_lu]) Constructing wrapper function "dgetri"... inv_a,info = dgetri(lu,piv,[lwork,overwrite_lu]) Constructing wrapper function "cgetri"... inv_a,info = cgetri(lu,piv,[lwork,overwrite_lu]) Constructing wrapper function "zgetri"... inv_a,info = zgetri(lu,piv,[lwork,overwrite_lu]) Constructing wrapper function "sgesdd"... u,s,vt,info = sgesdd(a,[compute_uv,lwork,overwrite_a]) Constructing wrapper function "dgesdd"... 
u,s,vt,info = dgesdd(a,[compute_uv,lwork,overwrite_a]) Constructing wrapper function "cgesdd"... u,s,vt,info = cgesdd(a,[compute_uv,lwork,overwrite_a]) Constructing wrapper function "zgesdd"... u,s,vt,info = zgesdd(a,[compute_uv,lwork,overwrite_a]) Constructing wrapper function "sgelss"... v,x,s,rank,info = sgelss(a,b,[cond,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "dgelss"... v,x,s,rank,info = dgelss(a,b,[cond,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "cgelss"... v,x,s,rank,info = cgelss(a,b,[cond,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "zgelss"... v,x,s,rank,info = zgelss(a,b,[cond,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "sgeqrf"... qr,tau,work,info = sgeqrf(a,[lwork,overwrite_a]) Constructing wrapper function "dgeqrf"... qr,tau,work,info = dgeqrf(a,[lwork,overwrite_a]) Constructing wrapper function "cgeqrf"... qr,tau,work,info = cgeqrf(a,[lwork,overwrite_a]) Constructing wrapper function "zgeqrf"... qr,tau,work,info = zgeqrf(a,[lwork,overwrite_a]) Constructing wrapper function "sgerqf"... qr,tau,work,info = sgerqf(a,[lwork,overwrite_a]) Constructing wrapper function "dgerqf"... qr,tau,work,info = dgerqf(a,[lwork,overwrite_a]) Constructing wrapper function "cgerqf"... qr,tau,work,info = cgerqf(a,[lwork,overwrite_a]) Constructing wrapper function "zgerqf"... qr,tau,work,info = zgerqf(a,[lwork,overwrite_a]) Constructing wrapper function "sorgqr"... q,work,info = sorgqr(a,tau,[lwork,overwrite_a]) Constructing wrapper function "dorgqr"... q,work,info = dorgqr(a,tau,[lwork,overwrite_a]) Constructing wrapper function "cungqr"... q,work,info = cungqr(a,tau,[lwork,overwrite_a]) Constructing wrapper function "zungqr"... q,work,info = zungqr(a,tau,[lwork,overwrite_a]) Constructing wrapper function "sgeev"... wr,wi,vl,vr,info = sgeev(a,[compute_vl,compute_vr,lwork,overwrite_a]) Constructing wrapper function "dgeev"... 
wr,wi,vl,vr,info = dgeev(a,[compute_vl,compute_vr,lwork,overwrite_a]) Constructing wrapper function "cgeev"... w,vl,vr,info = cgeev(a,[compute_vl,compute_vr,lwork,overwrite_a]) Constructing wrapper function "zgeev"... w,vl,vr,info = zgeev(a,[compute_vl,compute_vr,lwork,overwrite_a]) Constructing wrapper function "sgegv"... alphar,alphai,beta,vl,vr,info = sgegv(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "dgegv"... alphar,alphai,beta,vl,vr,info = dgegv(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "cgegv"... alpha,beta,vl,vr,info = cgegv(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "zgegv"... alpha,beta,vl,vr,info = zgegv(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "ssyev"... w,v,info = ssyev(a,[compute_v,lower,lwork,overwrite_a]) Constructing wrapper function "dsyev"... w,v,info = dsyev(a,[compute_v,lower,lwork,overwrite_a]) Constructing wrapper function "cheev"... w,v,info = cheev(a,[compute_v,lower,lwork,overwrite_a]) Constructing wrapper function "zheev"... w,v,info = zheev(a,[compute_v,lower,lwork,overwrite_a]) Constructing wrapper function "sposv"... c,x,info = sposv(a,b,[lower,overwrite_a,overwrite_b]) Constructing wrapper function "dposv"... c,x,info = dposv(a,b,[lower,overwrite_a,overwrite_b]) Constructing wrapper function "cposv"... c,x,info = cposv(a,b,[lower,overwrite_a,overwrite_b]) Constructing wrapper function "zposv"... c,x,info = zposv(a,b,[lower,overwrite_a,overwrite_b]) Constructing wrapper function "spotrf"... c,info = spotrf(a,[lower,clean,overwrite_a]) Constructing wrapper function "dpotrf"... c,info = dpotrf(a,[lower,clean,overwrite_a]) Constructing wrapper function "cpotrf"... c,info = cpotrf(a,[lower,clean,overwrite_a]) Constructing wrapper function "zpotrf"... c,info = zpotrf(a,[lower,clean,overwrite_a]) Constructing wrapper function "spotrs"... 
x,info = spotrs(c,b,[lower,overwrite_b]) Constructing wrapper function "dpotrs"... x,info = dpotrs(c,b,[lower,overwrite_b]) Constructing wrapper function "cpotrs"... x,info = cpotrs(c,b,[lower,overwrite_b]) Constructing wrapper function "zpotrs"... x,info = zpotrs(c,b,[lower,overwrite_b]) Constructing wrapper function "spotri"... inv_a,info = spotri(c,[lower,overwrite_c]) Constructing wrapper function "dpotri"... inv_a,info = dpotri(c,[lower,overwrite_c]) Constructing wrapper function "cpotri"... inv_a,info = cpotri(c,[lower,overwrite_c]) Constructing wrapper function "zpotri"... inv_a,info = zpotri(c,[lower,overwrite_c]) Constructing wrapper function "slauum"... a,info = slauum(c,[lower,overwrite_c]) Constructing wrapper function "dlauum"... a,info = dlauum(c,[lower,overwrite_c]) Constructing wrapper function "clauum"... a,info = clauum(c,[lower,overwrite_c]) Constructing wrapper function "zlauum"... a,info = zlauum(c,[lower,overwrite_c]) Constructing wrapper function "strtri"... inv_c,info = strtri(c,[lower,unitdiag,overwrite_c]) Constructing wrapper function "dtrtri"... inv_c,info = dtrtri(c,[lower,unitdiag,overwrite_c]) Constructing wrapper function "ctrtri"... inv_c,info = ctrtri(c,[lower,unitdiag,overwrite_c]) Constructing wrapper function "ztrtri"... inv_c,info = ztrtri(c,[lower,unitdiag,overwrite_c]) Constructing wrapper function "slaswp"... getarrdims:warning: assumed shape array, using 0 instead of '*' a = slaswp(a,piv,[k1,k2,off,inc,overwrite_a]) Constructing wrapper function "dlaswp"... getarrdims:warning: assumed shape array, using 0 instead of '*' a = dlaswp(a,piv,[k1,k2,off,inc,overwrite_a]) Constructing wrapper function "claswp"... getarrdims:warning: assumed shape array, using 0 instead of '*' a = claswp(a,piv,[k1,k2,off,inc,overwrite_a]) Constructing wrapper function "zlaswp"... getarrdims:warning: assumed shape array, using 0 instead of '*' a = zlaswp(a,piv,[k1,k2,off,inc,overwrite_a]) Constructing wrapper function "cgees"... 
t,sdim,w,vs,work,info = cgees(cselect,a,[compute_v,sort_t,lwork,cselect_extra_args,overwrite_a]) Constructing wrapper function "zgees"... t,sdim,w,vs,work,info = zgees(zselect,a,[compute_v,sort_t,lwork,zselect_extra_args,overwrite_a]) Constructing wrapper function "dgees"... t,sdim,wr,wi,vs,work,info = dgees(dselect,a,[compute_v,sort_t,lwork,dselect_extra_args,overwrite_a]) Constructing wrapper function "sgees"... t,sdim,wr,wi,vs,work,info = sgees(sselect,a,[compute_v,sort_t,lwork,sselect_extra_args,overwrite_a]) Constructing wrapper function "sggev"... alphar,alphai,beta,vl,vr,work,info = sggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "dggev"... alphar,alphai,beta,vl,vr,work,info = dggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "cggev"... alpha,beta,vl,vr,work,info = cggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "zggev"... alpha,beta,vl,vr,work,info = zggev(a,b,[compute_vl,compute_vr,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "ssbev"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = ssbev(ab,[compute_v,lower,ldab,overwrite_ab]) Constructing wrapper function "dsbev"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = dsbev(ab,[compute_v,lower,ldab,overwrite_ab]) Constructing wrapper function "ssbevd"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = ssbevd(ab,[compute_v,lower,ldab,liwork,overwrite_ab]) Constructing wrapper function "dsbevd"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = dsbevd(ab,[compute_v,lower,ldab,liwork,overwrite_ab]) Constructing wrapper function "ssbevx"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,m,ifail,info = ssbevx(ab,vl,vu,il,iu,[ldab,compute_v,range,lower,abstol,mmax,overwrite_ab]) Constructing wrapper function "dsbevx"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,m,ifail,info = dsbevx(ab,vl,vu,il,iu,[ldab,compute_v,range,lower,abstol,mmax,overwrite_ab]) Constructing wrapper function "chbevd"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = chbevd(ab,[compute_v,lower,ldab,lrwork,liwork,overwrite_ab]) Constructing wrapper function "zhbevd"... getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,info = zhbevd(ab,[compute_v,lower,ldab,lrwork,liwork,overwrite_ab]) Constructing wrapper function "chbevx"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,m,ifail,info = chbevx(ab,vl,vu,il,iu,[ldab,compute_v,range,lower,abstol,mmax,overwrite_ab]) Constructing wrapper function "zhbevx"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' w,z,m,ifail,info = zhbevx(ab,vl,vu,il,iu,[ldab,compute_v,range,lower,abstol,mmax,overwrite_ab]) Constructing wrapper function "sgbtrf"... getarrdims:warning: assumed shape array, using 0 instead of '*' lu,ipiv,info = sgbtrf(ab,kl,ku,[m,n,ldab,overwrite_ab]) Constructing wrapper function "dgbtrf"... getarrdims:warning: assumed shape array, using 0 instead of '*' lu,ipiv,info = dgbtrf(ab,kl,ku,[m,n,ldab,overwrite_ab]) Constructing wrapper function "cgbtrf"... getarrdims:warning: assumed shape array, using 0 instead of '*' lu,ipiv,info = cgbtrf(ab,kl,ku,[m,n,ldab,overwrite_ab]) Constructing wrapper function "zgbtrf"... getarrdims:warning: assumed shape array, using 0 instead of '*' lu,ipiv,info = zgbtrf(ab,kl,ku,[m,n,ldab,overwrite_ab]) Constructing wrapper function "sgbtrs"... 
warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,info = sgbtrs(ab,kl,ku,b,ipiv,[trans,n,ldab,ldb,overwrite_b]) Constructing wrapper function "dgbtrs"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,info = dgbtrs(ab,kl,ku,b,ipiv,[trans,n,ldab,ldb,overwrite_b]) Constructing wrapper function "cgbtrs"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,info = cgbtrs(ab,kl,ku,b,ipiv,[trans,n,ldab,ldb,overwrite_b]) Constructing wrapper function "zgbtrs"... warning: callstatement is defined without callprotoargument getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' x,info = zgbtrs(ab,kl,ku,b,ipiv,[trans,n,ldab,ldb,overwrite_b]) Constructing wrapper function "ssyevr"... w,z,info = ssyevr(a,[jobz,range,uplo,il,iu,lwork,overwrite_a]) Constructing wrapper function "dsyevr"... w,z,info = dsyevr(a,[jobz,range,uplo,il,iu,lwork,overwrite_a]) Constructing wrapper function "cheevr"... w,z,info = cheevr(a,[jobz,range,uplo,il,iu,lwork,overwrite_a]) Constructing wrapper function "zheevr"... w,z,info = zheevr(a,[jobz,range,uplo,il,iu,lwork,overwrite_a]) Constructing wrapper function "ssygv"... a,w,info = ssygv(a,b,[itype,jobz,uplo,overwrite_a,overwrite_b]) Constructing wrapper function "dsygv"... a,w,info = dsygv(a,b,[itype,jobz,uplo,overwrite_a,overwrite_b]) Constructing wrapper function "chegv"... a,w,info = chegv(a,b,[itype,jobz,uplo,overwrite_a,overwrite_b]) Constructing wrapper function "zhegv"... 
a,w,info = zhegv(a,b,[itype,jobz,uplo,overwrite_a,overwrite_b]) Constructing wrapper function "ssygvd"... a,w,info = ssygvd(a,b,[itype,jobz,uplo,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "dsygvd"... a,w,info = dsygvd(a,b,[itype,jobz,uplo,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "chegvd"... a,w,info = chegvd(a,b,[itype,jobz,uplo,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "zhegvd"... a,w,info = zhegvd(a,b,[itype,jobz,uplo,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "ssygvx"... w,z,ifail,info = ssygvx(a,b,iu,[itype,jobz,uplo,il,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "dsygvx"... w,z,ifail,info = dsygvx(a,b,iu,[itype,jobz,uplo,il,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "chegvx"... w,z,ifail,info = chegvx(a,b,iu,[itype,jobz,uplo,il,lwork,overwrite_a,overwrite_b]) Constructing wrapper function "zhegvx"... w,z,ifail,info = zhegvx(a,b,iu,[itype,jobz,uplo,il,lwork,overwrite_a,overwrite_b]) Creating wrapper for Fortran function "slamch"("slamch")... Constructing wrapper function "slamch"... slamch = slamch(cmach) Creating wrapper for Fortran function "dlamch"("dlamch")... Constructing wrapper function "dlamch"... dlamch = dlamch(cmach) Wrote C/API module "flapack" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapackmodule.c" Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack-f2pywrappers.f" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. adding 'build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/clapack.pyf' to sources. 
f2py options: [] f2py: build/src.macosx-10.4-x86_64-2.7/scipy/linalg/clapack.pyf Reading fortran codes... Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/linalg/clapack.pyf' (format:free) Post-processing... Block: clapack Block: empty_module Post-processing (stage 2)... Building modules... Building module "clapack"... Constructing wrapper function "empty_module"... empty_module() Wrote C/API module "clapack" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/linalg/clapackmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] f2py:> build/src.macosx-10.4-x86_64-2.7/scipy/linalg/_flinalgmodule.c Reading fortran codes... Reading file 'scipy/linalg/src/det.f' (format:fix,strict) Reading file 'scipy/linalg/src/lu.f' (format:fix,strict) Post-processing... Block: _flinalg {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:ddet_c vars2fortran: No typespec for argument "info". Block: ddet_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:ddet_r vars2fortran: No typespec for argument "info". Block: ddet_r {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:sdet_c vars2fortran: No typespec for argument "info". Block: sdet_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:sdet_r vars2fortran: No typespec for argument "info". Block: sdet_r {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:zdet_c vars2fortran: No typespec for argument "info". Block: zdet_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:zdet_r vars2fortran: No typespec for argument "info". Block: zdet_r {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:cdet_c vars2fortran: No typespec for argument "info". 
Block: cdet_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/det.f:cdet_r vars2fortran: No typespec for argument "info". Block: cdet_r {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/lu.f:dlu_c vars2fortran: No typespec for argument "info". Block: dlu_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/lu.f:zlu_c vars2fortran: No typespec for argument "info". Block: zlu_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/lu.f:slu_c vars2fortran: No typespec for argument "info". Block: slu_c {'attrspec': ['intent(out)']} In: :_flinalg:scipy/linalg/src/lu.f:clu_c vars2fortran: No typespec for argument "info". Block: clu_c Post-processing (stage 2)... Building modules... Building module "_flinalg"... Constructing wrapper function "ddet_c"... det,info = ddet_c(a,[overwrite_a]) Constructing wrapper function "ddet_r"... det,info = ddet_r(a,[overwrite_a]) Constructing wrapper function "sdet_c"... det,info = sdet_c(a,[overwrite_a]) Constructing wrapper function "sdet_r"... det,info = sdet_r(a,[overwrite_a]) Constructing wrapper function "zdet_c"... det,info = zdet_c(a,[overwrite_a]) Constructing wrapper function "zdet_r"... det,info = zdet_r(a,[overwrite_a]) Constructing wrapper function "cdet_c"... det,info = cdet_c(a,[overwrite_a]) Constructing wrapper function "cdet_r"... det,info = cdet_r(a,[overwrite_a]) Constructing wrapper function "dlu_c"... p,l,u,info = dlu_c(a,[permute_l,overwrite_a]) Constructing wrapper function "zlu_c"... p,l,u,info = zlu_c(a,[permute_l,overwrite_a]) Constructing wrapper function "slu_c"... p,l,u,info = slu_c(a,[permute_l,overwrite_a]) Constructing wrapper function "clu_c"... p,l,u,info = clu_c(a,[permute_l,overwrite_a]) Wrote C/API module "_flinalg" to file "build/src.macosx-10.4-x86_64-2.7/scipy/linalg/_flinalgmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. 
building extension "scipy.linalg.calc_lwork" sources f2py options: [] f2py:> build/src.macosx-10.4-x86_64-2.7/scipy/linalg/calc_lworkmodule.c Reading fortran codes... Reading file 'scipy/linalg/src/calc_lwork.f' (format:fix,strict) Post-processing... Block: calc_lwork Block: gehrd Block: gesdd Block: gelss Block: getri Block: geev Block: heev Block: syev Block: gees Block: geqrf Block: gqr Post-processing (stage 2)... Building modules... Building module "calc_lwork"... Constructing wrapper function "gehrd"... minwrk,maxwrk = gehrd(prefix,n,lo,hi) Constructing wrapper function "gesdd"... minwrk,maxwrk = gesdd(prefix,m,n,compute_uv) Constructing wrapper function "gelss"... minwrk,maxwrk = gelss(prefix,m,n,nrhs) Constructing wrapper function "getri"... minwrk,maxwrk = getri(prefix,n) Constructing wrapper function "geev"... minwrk,maxwrk = geev(prefix,n,[compute_vl,compute_vr]) Constructing wrapper function "heev"... minwrk,maxwrk = heev(prefix,n,[lower]) Constructing wrapper function "syev"... minwrk,maxwrk = syev(prefix,n,[lower]) Constructing wrapper function "gees"... minwrk,maxwrk = gees(prefix,n,[compute_v]) Constructing wrapper function "geqrf"... minwrk,maxwrk = geqrf(prefix,m,n) Constructing wrapper function "gqr"... minwrk,maxwrk = gqr(prefix,m,n) Wrote C/API module "calc_lwork" to file "build/src.macosx-10.4-x86_64-2.7/scipy/linalg/calc_lworkmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize/lbfgsb f2py options: [] f2py: scipy/optimize/lbfgsb/lbfgsb.pyf Reading fortran codes... 
Reading file 'scipy/optimize/lbfgsb/lbfgsb.pyf' (format:free) Post-processing... Block: _lbfgsb Block: setulb Post-processing (stage 2)... Building modules... Building module "_lbfgsb"... Constructing wrapper function "setulb"... setulb(m,x,l,u,nbd,f,g,factr,pgtol,wa,iwa,task,iprint,csave,lsave,isave,dsave,[n]) Wrote C/API module "_lbfgsb" to file "build/src.macosx-10.4-x86_64-2.7/scipy/optimize/lbfgsb/_lbfgsbmodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize/cobyla f2py options: [] f2py: scipy/optimize/cobyla/cobyla.pyf Reading fortran codes... Reading file 'scipy/optimize/cobyla/cobyla.pyf' (format:free) Post-processing... Block: _cobyla__user__routines Block: _cobyla_user_interface Block: calcfc Block: _cobyla Block: minimize In: scipy/optimize/cobyla/cobyla.pyf:_cobyla:unknown_interface:minimize get_useparameters: no module _cobyla__user__routines info used by minimize Post-processing (stage 2)... Building modules... Constructing call-back function "cb_calcfc_in__cobyla__user__routines" def calcfc(x,con): return f Building module "_cobyla"... Constructing wrapper function "minimize"... x = minimize(calcfc,m,x,rhobeg,rhoend,[iprint,maxfun,calcfc_extra_args]) Wrote C/API module "_cobyla" to file "build/src.macosx-10.4-x86_64-2.7/scipy/optimize/cobyla/_cobylamodule.c" adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources. adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs. building extension "scipy.optimize.minpack2" sources creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize/minpack2 f2py options: [] f2py: scipy/optimize/minpack2/minpack2.pyf Reading fortran codes... Reading file 'scipy/optimize/minpack2/minpack2.pyf' (format:free) Post-processing... 
Block: minpack2
Block: dcsrch
Block: dcstep
Post-processing (stage 2)...
Building modules...
Building module "minpack2"...
Constructing wrapper function "dcsrch"...
stp,f,g,task = dcsrch(stp,f,g,ftol,gtol,xtol,task,stpmin,stpmax,isave,dsave)
Constructing wrapper function "dcstep"...
stx,fx,dx,sty,fy,dy,stp,brackt = dcstep(stx,fx,dx,sty,fy,dy,stp,fp,dp,brackt,stpmin,stpmax)
Wrote C/API module "minpack2" to file "build/src.macosx-10.4-x86_64-2.7/scipy/optimize/minpack2/minpack2module.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.optimize._slsqp" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize/slsqp
f2py options: []
f2py: scipy/optimize/slsqp/slsqp.pyf
Reading fortran codes...
Reading file 'scipy/optimize/slsqp/slsqp.pyf' (format:free)
Post-processing...
Block: _slsqp
Block: slsqp
Post-processing (stage 2)...
Building modules...
Building module "_slsqp"...
Constructing wrapper function "slsqp"...
slsqp(m,meq,x,xl,xu,f,c,g,a,acc,iter,mode,w,jw,[la,n,l_w,l_jw])
Wrote C/API module "_slsqp" to file "build/src.macosx-10.4-x86_64-2.7/scipy/optimize/slsqp/_slsqpmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.optimize._nnls" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/optimize/nnls
f2py options: []
f2py: scipy/optimize/nnls/nnls.pyf
Reading fortran codes...
Reading file 'scipy/optimize/nnls/nnls.pyf' (format:free)
crackline: groupcounter=1 groupname={0: '', 1: 'python module', 2: 'interface', 3: 'subroutine'}
crackline: Mismatch of blocks encountered. Trying to fix it by assuming "end" statement.
Post-processing...
Block: _nnls
Block: nnls
Post-processing (stage 2)...
Building modules...
Building module "_nnls"...
Constructing wrapper function "nnls"...
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
getarrdims:warning: assumed shape array, using 0 instead of '*'
x,rnorm,mode = nnls(a,m,n,b,w,zz,index_bn,[mda,overwrite_a,overwrite_b])
Wrote C/API module "_nnls" to file "build/src.macosx-10.4-x86_64-2.7/scipy/optimize/nnls/_nnlsmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.signal.sigtools" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/signal
conv_template:> build/src.macosx-10.4-x86_64-2.7/scipy/signal/lfilter.c
conv_template:> build/src.macosx-10.4-x86_64-2.7/scipy/signal/correlate_nd.c
building extension "scipy.signal.spline" sources
building extension "scipy.sparse.linalg.isolve._iterative" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/STOPTEST2.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/getbreak.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/BiCGREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/BiCGSTABREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/CGREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/CGSREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/GMRESREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/QMRREVCOM.f
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/_iterative.pyf
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative
f2py options: []
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/_iterative.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/_iterative.pyf' (format:free)
Post-processing...
Block: _iterative
Block: sbicgrevcom
Block: dbicgrevcom
Block: cbicgrevcom
Block: zbicgrevcom
Block: sbicgstabrevcom
Block: dbicgstabrevcom
Block: cbicgstabrevcom
Block: zbicgstabrevcom
Block: scgrevcom
Block: dcgrevcom
Block: ccgrevcom
Block: zcgrevcom
Block: scgsrevcom
Block: dcgsrevcom
Block: ccgsrevcom
Block: zcgsrevcom
Block: sqmrrevcom
Block: dqmrrevcom
Block: cqmrrevcom
Block: zqmrrevcom
Block: sgmresrevcom
Block: dgmresrevcom
Block: cgmresrevcom
Block: zgmresrevcom
Block: sstoptest2
Block: dstoptest2
Block: cstoptest2
Block: zstoptest2
Post-processing (stage 2)...
Building modules...
Building module "_iterative"...
Constructing wrapper function "sbicgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = sbicgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dbicgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dbicgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "cbicgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = cbicgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zbicgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zbicgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "sbicgstabrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = sbicgstabrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dbicgstabrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dbicgstabrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "cbicgstabrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = cbicgstabrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zbicgstabrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zbicgstabrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "scgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = scgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dcgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dcgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "ccgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = ccgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zcgrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zcgrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "scgsrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = scgsrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dcgsrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dcgsrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "ccgsrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = ccgsrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zcgsrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zcgsrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "sqmrrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = sqmrrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dqmrrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dqmrrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "cqmrrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = cqmrrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zqmrrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zqmrrevcom(b,x,work,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "sgmresrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = sgmresrevcom(b,x,restrt,work,work2,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "dgmresrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = dgmresrevcom(b,x,restrt,work,work2,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "cgmresrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = cgmresrevcom(b,x,restrt,work,work2,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "zgmresrevcom"...
x,iter,resid,info,ndx1,ndx2,sclr1,sclr2,ijob = zgmresrevcom(b,x,restrt,work,work2,iter,resid,info,ndx1,ndx2,ijob)
Constructing wrapper function "sstoptest2"...
bnrm2,resid,info = sstoptest2(r,b,bnrm2,tol,info)
Constructing wrapper function "dstoptest2"...
bnrm2,resid,info = dstoptest2(r,b,bnrm2,tol,info)
Constructing wrapper function "cstoptest2"...
bnrm2,resid,info = cstoptest2(r,b,bnrm2,tol,info)
Constructing wrapper function "zstoptest2"...
bnrm2,resid,info = zstoptest2(r,b,bnrm2,tol,info)
Wrote C/API module "_iterative" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve/iterative/_iterativemodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.sparse.linalg.dsolve._superlu" sources
building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack
building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen
creating build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack
from_template:> build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/arpack.pyf
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen
creating build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack
f2py options: []
f2py: build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/arpack.pyf
Reading fortran codes...
Reading file 'build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/arpack.pyf' (format:free)
Line #5 in build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/arpack.pyf:" <_rd=real,double precision>"
crackline:1: No pattern for line
Line #6 in build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/arpack.pyf:" <_cd=complex,double complex>"
crackline:1: No pattern for line
Post-processing...
Block: _arpack
Block: ssaupd
Block: dsaupd
Block: sseupd
Block: dseupd
Block: snaupd
Block: dnaupd
Block: sneupd
Block: dneupd
Block: cnaupd
Block: znaupd
Block: cneupd
Block: zneupd
Post-processing (stage 2)...
Building modules...
Building module "_arpack"...
Constructing wrapper function "ssaupd"...
ido,resid,v,iparam,ipntr,info = ssaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "dsaupd"...
ido,resid,v,iparam,ipntr,info = dsaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "sseupd"...
d,z,info = sseupd(rvec,howmny,select,sigma,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[ldz,n,ncv,ldv,lworkl])
Constructing wrapper function "dseupd"...
d,z,info = dseupd(rvec,howmny,select,sigma,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[ldz,n,ncv,ldv,lworkl])
Constructing wrapper function "snaupd"...
ido,resid,v,iparam,ipntr,info = snaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "dnaupd"...
ido,resid,v,iparam,ipntr,info = dnaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "sneupd"...
dr,di,z,info = sneupd(rvec,howmny,select,sigmar,sigmai,workev,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[ldz,n,ncv,ldv,lworkl])
Constructing wrapper function "dneupd"...
dr,di,z,info = dneupd(rvec,howmny,select,sigmar,sigmai,workev,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,info,[ldz,n,ncv,ldv,lworkl])
Constructing wrapper function "cnaupd"...
ido,resid,v,iparam,ipntr,info = cnaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,rwork,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "znaupd"...
ido,resid,v,iparam,ipntr,info = znaupd(ido,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,rwork,info,[n,ncv,ldv,lworkl])
Constructing wrapper function "cneupd"...
d,z,info = cneupd(rvec,howmny,select,sigma,workev,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,rwork,info,[ldz,n,ncv,ldv,lworkl])
Constructing wrapper function "zneupd"...
d,z,info = zneupd(rvec,howmny,select,sigma,workev,bmat,which,nev,tol,resid,v,iparam,ipntr,workd,workl,rwork,info,[ldz,n,ncv,ldv,lworkl])
Constructing COMMON block support for "debug"...
logfil,ndigit,mgetv0,msaupd,msaup2,msaitr,mseigt,msapps,msgets,mseupd,mnaupd,mnaup2,mnaitr,mneigh,mnapps,mngets,mneupd,mcaupd,mcaup2,mcaitr,mceigh,mcapps,mcgets,mceupd
Constructing COMMON block support for "timing"...
nopx,nbx,nrorth,nitref,nrstrt,tsaupd,tsaup2,tsaitr,tseigt,tsgets,tsapps,tsconv,tnaupd,tnaup2,tnaitr,tneigh,tngets,tnapps,tnconv,tcaupd,tcaup2,tcaitr,tceigh,tcgets,tcapps,tcconv,tmvopx,tmvbx,tgetv0,titref,trvec
Wrote C/API module "_arpack" to file "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/_arpackmodule.c"
Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
adding 'build/src.macosx-10.4-x86_64-2.7/build/src.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources.
building extension "scipy.sparse.sparsetools._csr" sources
building extension "scipy.sparse.sparsetools._csc" sources
building extension "scipy.sparse.sparsetools._coo" sources
building extension "scipy.sparse.sparsetools._bsr" sources
building extension "scipy.sparse.sparsetools._dia" sources
building extension "scipy.sparse.sparsetools._csgraph" sources
building extension "scipy.spatial.qhull" sources
building extension "scipy.spatial.ckdtree" sources
building extension "scipy.spatial._distance_wrap" sources
building extension "scipy.special._cephes" sources
building extension "scipy.special.specfun" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/special
f2py options: ['--no-wrap-functions']
f2py: scipy/special/specfun.pyf
Reading fortran codes...
Reading file 'scipy/special/specfun.pyf' (format:free)
Post-processing...
Block: specfun
Block: clqmn
Block: lqmn
Block: clpmn
Block: jdzo
Block: bernob
Block: bernoa
Block: csphjy
Block: lpmns
Block: eulera
Block: clqn
Block: airyzo
Block: eulerb
Block: cva1
Block: lqnb
Block: lamv
Block: lagzo
Block: legzo
Block: pbdv
Block: cerzo
Block: lamn
Block: clpn
Block: lqmns
Block: chgm
Block: lpmn
Block: fcszo
Block: aswfb
Block: lqna
Block: cpbdn
Block: lpn
Block: fcoef
Block: sphi
Block: rcty
Block: lpni
Block: cyzo
Block: csphik
Block: sphj
Block: othpl
Block: klvnzo
Block: jyzo
Block: rctj
Block: herzo
Block: sphk
Block: pbvv
Block: segv
Block: sphy
Post-processing (stage 2)...
Building modules...
Building module "specfun"...
Constructing wrapper function "clqmn"...
cqm,cqd = clqmn(m,n,z)
Constructing wrapper function "lqmn"...
qm,qd = lqmn(m,n,x)
Constructing wrapper function "clpmn"...
cpm,cpd = clpmn(m,n,x,y)
Constructing wrapper function "jdzo"...
n,m,pcode,zo = jdzo(nt)
Constructing wrapper function "bernob"...
bn = bernob(n)
Constructing wrapper function "bernoa"...
bn = bernoa(n)
Constructing wrapper function "csphjy"...
nm,csj,cdj,csy,cdy = csphjy(n,z)
Constructing wrapper function "lpmns"...
pm,pd = lpmns(m,n,x)
Constructing wrapper function "eulera"...
en = eulera(n)
Constructing wrapper function "clqn"...
cqn,cqd = clqn(n,z)
Constructing wrapper function "airyzo"...
xa,xb,xc,xd = airyzo(nt,[kf])
Constructing wrapper function "eulerb"...
en = eulerb(n)
Constructing wrapper function "cva1"...
cv = cva1(kd,m,q)
Constructing wrapper function "lqnb"...
qn,qd = lqnb(n,x)
Constructing wrapper function "lamv"...
vm,vl,dl = lamv(v,x)
Constructing wrapper function "lagzo"...
x,w = lagzo(n)
Constructing wrapper function "legzo"...
x,w = legzo(n)
Constructing wrapper function "pbdv"...
dv,dp,pdf,pdd = pbdv(v,x)
Constructing wrapper function "cerzo"...
zo = cerzo(nt)
Constructing wrapper function "lamn"...
nm,bl,dl = lamn(n,x)
Constructing wrapper function "clpn"...
cpn,cpd = clpn(n,z)
Constructing wrapper function "lqmns"...
qm,qd = lqmns(m,n,x)
Constructing wrapper function "chgm"...
hg = chgm(a,b,x)
Constructing wrapper function "lpmn"...
pm,pd = lpmn(m,n,x)
Constructing wrapper function "fcszo"...
zo = fcszo(kf,nt)
Constructing wrapper function "aswfb"...
s1f,s1d = aswfb(m,n,c,x,kd,cv)
Constructing wrapper function "lqna"...
qn,qd = lqna(n,x)
Constructing wrapper function "cpbdn"...
cpb,cpd = cpbdn(n,z)
Constructing wrapper function "lpn"...
pn,pd = lpn(n,x)
Constructing wrapper function "fcoef"...
fc = fcoef(kd,m,q,a)
Constructing wrapper function "sphi"...
nm,si,di = sphi(n,x)
Constructing wrapper function "rcty"...
nm,ry,dy = rcty(n,x)
Constructing wrapper function "lpni"...
pn,pd,pl = lpni(n,x)
Constructing wrapper function "cyzo"...
zo,zv = cyzo(nt,kf,kc)
Constructing wrapper function "csphik"...
nm,csi,cdi,csk,cdk = csphik(n,z)
Constructing wrapper function "sphj"...
nm,sj,dj = sphj(n,x)
Constructing wrapper function "othpl"...
pl,dpl = othpl(kf,n,x)
Constructing wrapper function "klvnzo"...
zo = klvnzo(nt,kd)
Constructing wrapper function "jyzo"...
rj0,rj1,ry0,ry1 = jyzo(n,nt)
Constructing wrapper function "rctj"...
nm,rj,dj = rctj(n,x)
Constructing wrapper function "herzo"...
x,w = herzo(n)
Constructing wrapper function "sphk"...
nm,sk,dk = sphk(n,x)
Constructing wrapper function "pbvv"...
vv,vp,pvf,pvd = pbvv(v,x)
Constructing wrapper function "segv"...
cv,eg = segv(m,n,c,kd)
Constructing wrapper function "sphy"...
nm,sy,dy = sphy(n,x)
Wrote C/API module "specfun" to file "build/src.macosx-10.4-x86_64-2.7/scipy/special/specfunmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.special.orthogonal_eval" sources
building extension "scipy.special.lambertw" sources
building extension "scipy.stats.statlib" sources
creating build/src.macosx-10.4-x86_64-2.7/scipy/stats
f2py options: ['--no-wrap-functions']
f2py: scipy/stats/statlib.pyf
Reading fortran codes...
Reading file 'scipy/stats/statlib.pyf' (format:free)
Post-processing...
Block: statlib
Block: swilk
Block: wprob
Block: gscale
Block: prho
Post-processing (stage 2)...
Building modules...
Building module "statlib"...
Constructing wrapper function "swilk"...
a,w,pw,ifault = swilk(x,a,[init,n1])
Constructing wrapper function "wprob"...
astart,a1,ifault = wprob(test,other)
Constructing wrapper function "gscale"...
astart,a1,ifault = gscale(test,other)
Constructing wrapper function "prho"...
ifault = prho(n,is)
Wrote C/API module "statlib" to file "build/src.macosx-10.4-x86_64-2.7/scipy/stats/statlibmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.stats.vonmises_cython" sources
building extension "scipy.stats.futil" sources
f2py options: []
f2py:> build/src.macosx-10.4-x86_64-2.7/scipy/stats/futilmodule.c
Reading fortran codes...
Reading file 'scipy/stats/futil.f' (format:fix,strict)
Post-processing...
Block: futil
Block: dqsort
Block: dfreps
Post-processing (stage 2)...
Building modules...
Building module "futil"...
Constructing wrapper function "dqsort"...
arr = dqsort(arr,[overwrite_arr])
Constructing wrapper function "dfreps"...
replist,repnum,nlist = dfreps(arr)
Wrote C/API module "futil" to file "build/src.macosx-10.4-x86_64-2.7/scipy/stats/futilmodule.c"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
building extension "scipy.stats.mvn" sources
f2py options: []
f2py: scipy/stats/mvn.pyf
Reading fortran codes...
Reading file 'scipy/stats/mvn.pyf' (format:free)
Post-processing...
Block: mvn
Block: mvnun
Block: mvndst
Post-processing (stage 2)...
Building modules...
Building module "mvn"...
Constructing wrapper function "mvnun"...
value,inform = mvnun(lower,upper,means,covar,[maxpts,abseps,releps])
Constructing wrapper function "mvndst"...
error,value,inform = mvndst(lower,upper,infin,correl,[maxpts,abseps,releps])
Constructing COMMON block support for "dkblck"...
ivls
Wrote C/API module "mvn" to file "build/src.macosx-10.4-x86_64-2.7/scipy/stats/mvnmodule.c"
Fortran 77 wrappers are saved to "build/src.macosx-10.4-x86_64-2.7/scipy/stats/mvn-f2pywrappers.f"
adding 'build/src.macosx-10.4-x86_64-2.7/fortranobject.c' to sources.
adding 'build/src.macosx-10.4-x86_64-2.7' to include_dirs.
adding 'build/src.macosx-10.4-x86_64-2.7/scipy/stats/mvn-f2pywrappers.f' to sources.
building extension "scipy.ndimage._nd_image" sources
building data_files sources
build_src: building npy-pkg config files
running build_py
creating build/lib.macosx-10.4-x86_64-2.7
creating build/lib.macosx-10.4-x86_64-2.7/scipy
copying scipy/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy
copying scipy/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy
copying scipy/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy
copying scipy/version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy
copying build/src.macosx-10.4-x86_64-2.7/scipy/__config__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy
creating build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/hierarchy.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
copying scipy/cluster/vq.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/cluster
creating build/lib.macosx-10.4-x86_64-2.7/scipy/constants
copying scipy/constants/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/constants
copying scipy/constants/codata.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/constants
copying scipy/constants/constants.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/constants
copying scipy/constants/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/constants
creating build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/basic.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/fftpack_version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/helper.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/pseudo_diffs.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/realtransforms.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
copying scipy/fftpack/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/fftpack
creating build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/ode.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/odepack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/quadpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/quadrature.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
copying scipy/integrate/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/integrate
creating build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/fitpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/fitpack2.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/generate_interpnd.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/interpnd_info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/interpolate.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/interpolate_wrapper.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/ndgriddata.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/polyint.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/rbf.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
copying scipy/interpolate/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/interpolate
creating build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/data_store.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/dumb_shelve.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/dumbdbm_patched.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/idl.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/mmio.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/netcdf.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
copying scipy/io/wavfile.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io
creating build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/byteordercodes.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/mio.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/mio4.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/mio5.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/mio5_params.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/miobase.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
copying scipy/io/matlab/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/matlab
creating build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
copying scipy/io/arff/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
copying scipy/io/arff/arffread.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
copying scipy/io/arff/myfunctools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
copying scipy/io/arff/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
copying scipy/io/arff/utils.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/io/arff
creating build/lib.macosx-10.4-x86_64-2.7/scipy/lib
copying scipy/lib/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib
copying scipy/lib/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib
copying scipy/lib/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib
copying scipy/lib/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib
creating build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
copying scipy/lib/blas/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
copying scipy/lib/blas/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
copying scipy/lib/blas/scons_support.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
copying scipy/lib/blas/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
copying scipy/lib/blas/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/blas
creating build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
copying scipy/lib/lapack/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
copying scipy/lib/lapack/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
copying scipy/lib/lapack/scons_support.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
copying scipy/lib/lapack/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
copying scipy/lib/lapack/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/lib/lapack
creating build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/basic.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/blas.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp_cholesky.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp_lu.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp_qr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp_schur.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/decomp_svd.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/flinalg.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/interface_gen.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/lapack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/linalg_version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/matfuncs.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/misc.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/scons_support.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/setup_atlas_version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
copying scipy/linalg/special_matrices.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/linalg
creating build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/maxentropy.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/maxentutils.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
copying scipy/maxentropy/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/maxentropy
creating build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/common.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/doccer.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/pilutil.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
copying scipy/misc/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/misc
creating build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/models.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/odrpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
copying scipy/odr/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/odr
creating build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/_tstutils.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/anneal.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/cobyla.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/lbfgsb.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/linesearch.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/minpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/nnls.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/nonlin.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/optimize.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/slsqp.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/tnc.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
copying scipy/optimize/zeros.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/optimize
creating build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/bsplines.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/filter_design.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/fir_filter_design.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/ltisys.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/signaltools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/waveforms.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/wavelets.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
copying scipy/signal/windows.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/signal
creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/base.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/bsr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/compressed.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/construct.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/coo.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/csc.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/csgraph.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/csr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/data.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/dia.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/dok.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/extract.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/lil.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse
copying scipy/sparse/setupscons.py ->
build/lib.macosx-10.4-x86_64-2.7/scipy/sparse copying scipy/sparse/spfuncs.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse copying scipy/sparse/sputils.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg copying scipy/sparse/linalg/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg copying scipy/sparse/linalg/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg copying scipy/sparse/linalg/interface.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg copying scipy/sparse/linalg/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg copying scipy/sparse/linalg/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/iterative.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/lgmres.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/lsqr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/minres.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve copying scipy/sparse/linalg/isolve/utils.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/isolve creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve copying scipy/sparse/linalg/dsolve/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve copying scipy/sparse/linalg/dsolve/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve copying 
scipy/sparse/linalg/dsolve/linsolve.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve copying scipy/sparse/linalg/dsolve/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve copying scipy/sparse/linalg/dsolve/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/umfpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/umfpack creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen copying scipy/sparse/linalg/eigen/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen copying scipy/sparse/linalg/eigen/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen copying scipy/sparse/linalg/eigen/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen copying scipy/sparse/linalg/eigen/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack copying scipy/sparse/linalg/eigen/arpack/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack copying scipy/sparse/linalg/eigen/arpack/arpack.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack copying scipy/sparse/linalg/eigen/arpack/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack copying 
scipy/sparse/linalg/eigen/arpack/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack copying scipy/sparse/linalg/eigen/arpack/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg copying scipy/sparse/linalg/eigen/lobpcg/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg copying scipy/sparse/linalg/eigen/lobpcg/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg copying scipy/sparse/linalg/eigen/lobpcg/lobpcg.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg copying scipy/sparse/linalg/eigen/lobpcg/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg copying scipy/sparse/linalg/eigen/lobpcg/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/lobpcg creating build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/bsr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/coo.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/csc.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/csgraph.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/csr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/dia.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools copying scipy/sparse/sparsetools/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/sparse/sparsetools creating build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/__init__.py -> 
build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/distance.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/generate_qhull.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/kdtree.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial copying scipy/spatial/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/spatial creating build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/basic.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/gendoc.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/orthogonal.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/special_version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special copying scipy/special/spfun_stats.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/special creating build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/_support.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/distributions.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/kde.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/morestats.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/mstats.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying 
scipy/stats/mstats_basic.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/mstats_extras.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/rv.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/stats.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats copying scipy/stats/vonmises.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/stats creating build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/_ni_support.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/filters.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/fourier.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/interpolation.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/io.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/measurements.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/morphology.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage copying scipy/ndimage/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/ndimage creating build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/__init__.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/accelerate_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/ast_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/base_info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/base_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave 
copying scipy/weave/blitz_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/blitz_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/build_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/bytecodecompiler.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/c_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/catalog.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/common_info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/converters.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/cpp_namespace_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/ext_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/inline_tools.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/md5_load.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/numpy_scalar_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/platform_info.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/setup.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/setupscons.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/size_check.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/slice_handler.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/standard_array_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/swig2_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/swigptr.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/swigptr2.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying scipy/weave/vtk_spec.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave copying 
scipy/weave/weave_version.py -> build/lib.macosx-10.4-x86_64-2.7/scipy/weave running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.4-x86_64-2.7 creating build/temp.macosx-10.4-x86_64-2.7/scipy creating build/temp.macosx-10.4-x86_64-2.7/scipy/fftpack creating build/temp.macosx-10.4-x86_64-2.7/scipy/fftpack/src creating build/temp.macosx-10.4-x86_64-2.7/scipy/fftpack/src/dfftpack compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/fftpack/src/dfftpack/dcosqb.f gfortran:f77: scipy/fftpack/src/dfftpack/dcosqf.f gfortran:f77: scipy/fftpack/src/dfftpack/dcosqi.f gfortran:f77: scipy/fftpack/src/dfftpack/dcost.f gfortran:f77: scipy/fftpack/src/dfftpack/dcosti.f gfortran:f77: scipy/fftpack/src/dfftpack/dfftb.f gfortran:f77: scipy/fftpack/src/dfftpack/dfftb1.f gfortran:f77: scipy/fftpack/src/dfftpack/dfftf.f gfortran:f77: scipy/fftpack/src/dfftpack/dfftf1.f gfortran:f77: scipy/fftpack/src/dfftpack/dffti.f gfortran:f77: 
scipy/fftpack/src/dfftpack/dffti1.f scipy/fftpack/src/dfftpack/dffti1.f: In function ?dffti1?: scipy/fftpack/src/dfftpack/dffti1.f:13:0: warning: ?ntry? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/fftpack/src/dfftpack/dsinqb.f gfortran:f77: scipy/fftpack/src/dfftpack/dsinqf.f gfortran:f77: scipy/fftpack/src/dfftpack/dsinqi.f gfortran:f77: scipy/fftpack/src/dfftpack/dsint.f gfortran:f77: scipy/fftpack/src/dfftpack/dsint1.f gfortran:f77: scipy/fftpack/src/dfftpack/dsinti.f gfortran:f77: scipy/fftpack/src/dfftpack/zfftb.f gfortran:f77: scipy/fftpack/src/dfftpack/zfftb1.f gfortran:f77: scipy/fftpack/src/dfftpack/zfftf.f gfortran:f77: scipy/fftpack/src/dfftpack/zfftf1.f gfortran:f77: scipy/fftpack/src/dfftpack/zffti.f gfortran:f77: scipy/fftpack/src/dfftpack/zffti1.f scipy/fftpack/src/dfftpack/zffti1.f: In function ?zffti1?: scipy/fftpack/src/dfftpack/zffti1.f:13:0: warning: ?ntry? may be used uninitialized in this function [-Wuninitialized] ar: adding 23 object files to build/temp.macosx-10.4-x86_64-2.7/libdfftpack.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libdfftpack.a building 'fftpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.4-x86_64-2.7/scipy/fftpack/src/fftpack compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/fftpack/src/fftpack/cfftb.f gfortran:f77: scipy/fftpack/src/fftpack/cfftb1.f gfortran:f77: scipy/fftpack/src/fftpack/cfftf.f gfortran:f77: scipy/fftpack/src/fftpack/cfftf1.f gfortran:f77: scipy/fftpack/src/fftpack/cffti.f gfortran:f77: scipy/fftpack/src/fftpack/cffti1.f 
scipy/fftpack/src/fftpack/cffti1.f: In function ?cffti1?: scipy/fftpack/src/fftpack/cffti1.f:12:0: warning: ?ntry? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/fftpack/src/fftpack/cosqb.f gfortran:f77: scipy/fftpack/src/fftpack/cosqf.f gfortran:f77: scipy/fftpack/src/fftpack/cosqi.f gfortran:f77: scipy/fftpack/src/fftpack/cost.f gfortran:f77: scipy/fftpack/src/fftpack/costi.f gfortran:f77: scipy/fftpack/src/fftpack/rfftb.f gfortran:f77: scipy/fftpack/src/fftpack/rfftb1.f gfortran:f77: scipy/fftpack/src/fftpack/rfftf.f gfortran:f77: scipy/fftpack/src/fftpack/rfftf1.f gfortran:f77: scipy/fftpack/src/fftpack/rffti.f gfortran:f77: scipy/fftpack/src/fftpack/rffti1.f scipy/fftpack/src/fftpack/rffti1.f: In function ?rffti1?: scipy/fftpack/src/fftpack/rffti1.f:12:0: warning: ?ntry? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/fftpack/src/fftpack/sinqb.f gfortran:f77: scipy/fftpack/src/fftpack/sinqf.f gfortran:f77: scipy/fftpack/src/fftpack/sinqi.f gfortran:f77: scipy/fftpack/src/fftpack/sint.f gfortran:f77: scipy/fftpack/src/fftpack/sint1.f gfortran:f77: scipy/fftpack/src/fftpack/sinti.f ar: adding 23 object files to build/temp.macosx-10.4-x86_64-2.7/libfftpack.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libfftpack.a building 'linpack_lite' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate/linpack_lite compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/integrate/linpack_lite/dgbfa.f 
gfortran:f77: scipy/integrate/linpack_lite/dgbsl.f gfortran:f77: scipy/integrate/linpack_lite/dgefa.f gfortran:f77: scipy/integrate/linpack_lite/dgesl.f gfortran:f77: scipy/integrate/linpack_lite/dgtsl.f gfortran:f77: scipy/integrate/linpack_lite/zgbfa.f scipy/integrate/linpack_lite/zgbfa.f:95.21: dimag(zdumi) = (0.0d0,-1.0d0)*zdumi 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) scipy/integrate/linpack_lite/zgbfa.f:94.21: dreal(zdumr) = zdumr 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) gfortran:f77: scipy/integrate/linpack_lite/zgbsl.f scipy/integrate/linpack_lite/zgbsl.f:73.21: dimag(zdumi) = (0.0d0,-1.0d0)*zdumi 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) scipy/integrate/linpack_lite/zgbsl.f:72.21: dreal(zdumr) = zdumr 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) gfortran:f77: scipy/integrate/linpack_lite/zgefa.f scipy/integrate/linpack_lite/zgefa.f:59.21: dimag(zdumi) = (0.0d0,-1.0d0)*zdumi 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) scipy/integrate/linpack_lite/zgefa.f:58.21: dreal(zdumr) = zdumr 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) gfortran:f77: scipy/integrate/linpack_lite/zgesl.f scipy/integrate/linpack_lite/zgesl.f:67.21: dimag(zdumi) = (0.0d0,-1.0d0)*zdumi 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) scipy/integrate/linpack_lite/zgesl.f:66.21: dreal(zdumr) = zdumr 1 Warning: Possible change of value in conversion from COMPLEX(8) to REAL(8) at (1) ar: adding 9 object files to build/temp.macosx-10.4-x86_64-2.7/liblinpack_lite.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/liblinpack_lite.a building 'mach' library using additional config_fc from setup script for fortran compiler: {'noopt': ('scipy/integrate/setup.pyc', 1)} customize Gnu95FCompiler compiling Fortran sources Fortran f77 
compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate/mach compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/integrate/mach/d1mach.f gfortran:f77: scipy/integrate/mach/i1mach.f gfortran:f77: scipy/integrate/mach/r1mach.f scipy/integrate/mach/r1mach.f:167.27: CALL I1MCRA(SMALL, K, 16, 0, 0) 1 Warning: Rank mismatch in argument 'a' at (1) (scalar and rank-1) scipy/integrate/mach/r1mach.f:168.27: CALL I1MCRA(LARGE, K, 32751, 16777215, 16777215) 1 Warning: Rank mismatch in argument 'a' at (1) (scalar and rank-1) scipy/integrate/mach/r1mach.f:169.27: CALL I1MCRA(RIGHT, K, 15520, 0, 0) 1 Warning: Rank mismatch in argument 'a' at (1) (scalar and rank-1) scipy/integrate/mach/r1mach.f:170.27: CALL I1MCRA(DIVER, K, 15536, 0, 0) 1 Warning: Rank mismatch in argument 'a' at (1) (scalar and rank-1) scipy/integrate/mach/r1mach.f:171.27: CALL I1MCRA(LOG10, K, 16339, 4461392, 10451455) 1 Warning: Rank mismatch in argument 'a' at (1) (scalar and rank-1) gfortran:f77: scipy/integrate/mach/xerror.f scipy/integrate/mach/xerror.f:1.37: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l1' at (1) scipy/integrate/mach/xerror.f:1.40: SUBROUTINE XERROR(MESS,NMESS,L1,L2) 1 Warning: Unused dummy argument 'l2' at (1) ar: adding 4 object files to build/temp.macosx-10.4-x86_64-2.7/libmach.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libmach.a building 'quadpack' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix 
compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate/quadpack compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/integrate/quadpack/dqag.f gfortran:f77: scipy/integrate/quadpack/dqage.f gfortran:f77: scipy/integrate/quadpack/dqagi.f gfortran:f77: scipy/integrate/quadpack/dqagie.f scipy/integrate/quadpack/dqagie.f: In function ?dqagie?: scipy/integrate/quadpack/dqagie.f:384:0: warning: ?small? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqagie.f:372:0: warning: ?ertest? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqagie.f:362:0: warning: ?erlarg? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/integrate/quadpack/dqagp.f gfortran:f77: scipy/integrate/quadpack/dqagpe.f scipy/integrate/quadpack/dqagpe.f: In function ?dqagpe?: scipy/integrate/quadpack/dqagpe.f:196:0: warning: ?k? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/integrate/quadpack/dqags.f gfortran:f77: scipy/integrate/quadpack/dqagse.f scipy/integrate/quadpack/dqagse.f: In function ?dqagse?: scipy/integrate/quadpack/dqagse.f:376:0: warning: ?small? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqagse.f:363:0: warning: ?ertest? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqagse.f:353:0: warning: ?erlarg? 
may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/integrate/quadpack/dqawc.f gfortran:f77: scipy/integrate/quadpack/dqawce.f gfortran:f77: scipy/integrate/quadpack/dqawf.f gfortran:f77: scipy/integrate/quadpack/dqawfe.f scipy/integrate/quadpack/dqawfe.f:267.10: 10 l = dabs(omega) 1 Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1) scipy/integrate/quadpack/dqawfe.f: In function ?dqawfe?: scipy/integrate/quadpack/dqawfe.f:356:0: warning: ?drl? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqawfe.f:316:0: warning: ?ll? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/integrate/quadpack/dqawo.f gfortran:f77: scipy/integrate/quadpack/dqawoe.f scipy/integrate/quadpack/dqawoe.f: In function ?dqawoe?: scipy/integrate/quadpack/dqawoe.f:449:0: warning: ?ertest? may be used uninitialized in this function [-Wuninitialized] scipy/integrate/quadpack/dqawoe.f:428:0: warning: ?erlarg? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/integrate/quadpack/dqaws.f gfortran:f77: scipy/integrate/quadpack/dqawse.f gfortran:f77: scipy/integrate/quadpack/dqc25c.f gfortran:f77: scipy/integrate/quadpack/dqc25f.f scipy/integrate/quadpack/dqc25f.f: In function ?dqc25f?: scipy/integrate/quadpack/dqc25f.f:325:0: warning: ?m? 
may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/quadpack/dqc25s.f
gfortran:f77: scipy/integrate/quadpack/dqcheb.f
gfortran:f77: scipy/integrate/quadpack/dqelg.f
gfortran:f77: scipy/integrate/quadpack/dqk15.f
gfortran:f77: scipy/integrate/quadpack/dqk15i.f
gfortran:f77: scipy/integrate/quadpack/dqk15w.f
gfortran:f77: scipy/integrate/quadpack/dqk21.f
gfortran:f77: scipy/integrate/quadpack/dqk31.f
gfortran:f77: scipy/integrate/quadpack/dqk41.f
gfortran:f77: scipy/integrate/quadpack/dqk51.f
gfortran:f77: scipy/integrate/quadpack/dqk61.f
gfortran:f77: scipy/integrate/quadpack/dqmomo.f
scipy/integrate/quadpack/dqmomo.f:126.5:
   90 return
      1
Warning: Label 90 at (1) defined but not used
gfortran:f77: scipy/integrate/quadpack/dqng.f
scipy/integrate/quadpack/dqng.f: In function 'dqng':
scipy/integrate/quadpack/dqng.f:365:0: warning: 'resasc' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/quadpack/dqng.f:366:0: warning: 'resabs' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/quadpack/dqng.f:363:0: warning: 'res43' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/quadpack/dqng.f:348:0: warning: 'res21' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/quadpack/dqng.f:338:0: warning: 'ipx' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/quadpack/dqpsrt.f
gfortran:f77: scipy/integrate/quadpack/dqwgtc.f
scipy/integrate/quadpack/dqwgtc.f:1.54:
      double precision function dqwgtc(x,c,p2,p3,p4,kp)
      1
Warning: Unused dummy argument 'kp' at (1)
scipy/integrate/quadpack/dqwgtc.f:1.45:
      double precision function dqwgtc(x,c,p2,p3,p4,kp)
      1
Warning: Unused dummy argument 'p2' at (1)
scipy/integrate/quadpack/dqwgtc.f:1.48:
      double precision function dqwgtc(x,c,p2,p3,p4,kp)
      1
Warning: Unused dummy argument 'p3' at (1)
scipy/integrate/quadpack/dqwgtc.f:1.51:
      double precision function dqwgtc(x,c,p2,p3,p4,kp)
      1
Warning: Unused dummy argument 'p4' at (1)
gfortran:f77: scipy/integrate/quadpack/dqwgtf.f
scipy/integrate/quadpack/dqwgtf.f:1.49:
      double precision function dqwgtf(x,omega,p2,p3,p4,integr)
      1
Warning: Unused dummy argument 'p2' at (1)
scipy/integrate/quadpack/dqwgtf.f:1.52:
      double precision function dqwgtf(x,omega,p2,p3,p4,integr)
      1
Warning: Unused dummy argument 'p3' at (1)
scipy/integrate/quadpack/dqwgtf.f:1.55:
      double precision function dqwgtf(x,omega,p2,p3,p4,integr)
      1
Warning: Unused dummy argument 'p4' at (1)
gfortran:f77: scipy/integrate/quadpack/dqwgts.f
ar: adding 35 object files to build/temp.macosx-10.4-x86_64-2.7/libquadpack.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libquadpack.a
building 'odepack' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate/odepack
compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c'
gfortran:f77: scipy/integrate/odepack/adjlr.f
gfortran:f77: scipy/integrate/odepack/aigbt.f
gfortran:f77: scipy/integrate/odepack/ainvg.f
gfortran:f77: scipy/integrate/odepack/blkdta000.f
gfortran:f77: scipy/integrate/odepack/bnorm.f
gfortran:f77: scipy/integrate/odepack/cdrv.f
gfortran:f77: scipy/integrate/odepack/cfode.f
gfortran:f77: scipy/integrate/odepack/cntnzu.f
gfortran:f77: scipy/integrate/odepack/ddasrt.f
scipy/integrate/odepack/ddasrt.f:1538.3:
  770 MSG = 'DASSL-- RUN TERMINATED. APPARENT INFINITE LOOP'
      1
Warning: Label 770 at (1) defined but not used
scipy/integrate/odepack/ddasrt.f:1080.3:
  360 ITEMP = LPHI + NEQ
      1
Warning: Label 360 at (1) defined but not used
scipy/integrate/odepack/ddasrt.f:1022.3:
  300 CONTINUE
      1
Warning: Label 300 at (1) defined but not used
scipy/integrate/odepack/ddasrt.f:1096.19:
     *   RWORK(LGX),JROOT,IRT,RWORK(LROUND),INFO(3),
      1
Warning: Rank mismatch in argument 'jroot' at (1) (rank-1 and scalar)
scipy/integrate/odepack/ddasrt.f:1106.19:
     *   RWORK(LGX),JROOT,IRT,RWORK(LROUND),INFO(3),
      1
Warning: Rank mismatch in argument 'jroot' at (1) (rank-1 and scalar)
scipy/integrate/odepack/ddasrt.f:1134.19:
     *   RWORK(LGX),JROOT,IRT,RWORK(LROUND),INFO(3),
      1
Warning: Rank mismatch in argument 'jroot' at (1) (rank-1 and scalar)
scipy/integrate/odepack/ddasrt.f:1298.19:
     *   RWORK(LGX),JROOT,IRT,RWORK(LROUND),INFO(3),
      1
Warning: Rank mismatch in argument 'jroot' at (1) (rank-1 and scalar)
scipy/integrate/odepack/ddasrt.f:1932.40:
      SUBROUTINE XERRWV (MSG, NMES, NERR, LEVEL, NI, I1, I2, NR, R1, R2)
      1
Warning: Unused dummy argument 'nerr' at (1)
gfortran:f77: scipy/integrate/odepack/ddassl.f
scipy/integrate/odepack/ddassl.f:3153.5:
   30 IF (LEVEL.LE.0 .OR. (LEVEL.EQ.1 .AND. MKNTRL.LE.1)) RETURN
      1
Warning: Label 30 at (1) defined but not used
scipy/integrate/odepack/ddassl.f:1647.62:
      DOUBLE PRECISION FUNCTION DDANRM (NEQ, V, WT, RPAR, IPAR)
      1
Warning: Unused dummy argument 'ipar' at (1)
scipy/integrate/odepack/ddassl.f:1647.56:
      DOUBLE PRECISION FUNCTION DDANRM (NEQ, V, WT, RPAR, IPAR)
      1
Warning: Unused dummy argument 'rpar' at (1)
scipy/integrate/odepack/ddassl.f:1605.64:
      SUBROUTINE DDAWTS (NEQ, IWT, RTOL, ATOL, Y, WT, RPAR, IPAR)
      1
Warning: Unused dummy argument 'ipar' at (1)
scipy/integrate/odepack/ddassl.f:1605.58:
      SUBROUTINE DDAWTS (NEQ, IWT, RTOL, ATOL, Y, WT, RPAR, IPAR)
      1
Warning: Unused dummy argument 'rpar' at (1)
scipy/integrate/odepack/ddassl.f:3170.30:
      SUBROUTINE XERHLT (MESSG)
      1
Warning: Unused dummy argument 'messg' at (1)
scipy/integrate/odepack/ddassl.f: In function 'ddastp':
scipy/integrate/odepack/ddassl.f:2456:0: warning: 'terkm1' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/ddassl.f:2481:0: warning: 'erkm1' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/ddassl.f: In function 'ddaini':
scipy/integrate/odepack/ddassl.f:1857:0: warning: 's' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/decbt.f
gfortran:f77: scipy/integrate/odepack/ewset.f
gfortran:f77: scipy/integrate/odepack/fnorm.f
gfortran:f77: scipy/integrate/odepack/intdy.f
gfortran:f77: scipy/integrate/odepack/iprep.f
gfortran:f77: scipy/integrate/odepack/jgroup.f
gfortran:f77: scipy/integrate/odepack/lsoda.f
scipy/integrate/odepack/lsoda.f: In function 'lsoda':
scipy/integrate/odepack/lsoda.f:1424:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/lsoda.f:1112:0: warning: 'lenwm' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/lsodar.f
scipy/integrate/odepack/lsodar.f: In function 'lsodar':
scipy/integrate/odepack/lsodar.f:1606:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/lsodar.f:1255:0: warning: 'lenwm' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/lsode.f
scipy/integrate/odepack/lsode.f: In function 'lsode':
scipy/integrate/odepack/lsode.f:1311:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/lsodes.f
scipy/integrate/odepack/lsodes.f: In function 'lsodes':
scipy/integrate/odepack/lsodes.f:1716:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/lsodi.f
scipy/integrate/odepack/lsodi.f: In function 'lsodi':
scipy/integrate/odepack/lsodi.f:1521:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/lsoibt.f
scipy/integrate/odepack/lsoibt.f: In function 'lsoibt':
scipy/integrate/odepack/lsoibt.f:1575:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/md.f
gfortran:f77: scipy/integrate/odepack/mdi.f
gfortran:f77: scipy/integrate/odepack/mdm.f
gfortran:f77: scipy/integrate/odepack/mdp.f
scipy/integrate/odepack/mdp.f: In function 'mdp':
scipy/integrate/odepack/mdp.f:81:0: warning: 'free' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/mdu.f
gfortran:f77: scipy/integrate/odepack/nnfc.f
gfortran:f77: scipy/integrate/odepack/nnsc.f
gfortran:f77: scipy/integrate/odepack/nntc.f
gfortran:f77: scipy/integrate/odepack/nroc.f
gfortran:f77: scipy/integrate/odepack/nsfc.f
gfortran:f77: scipy/integrate/odepack/odrv.f
gfortran:f77: scipy/integrate/odepack/pjibt.f
gfortran:f77: scipy/integrate/odepack/prep.f
gfortran:f77: scipy/integrate/odepack/prepj.f
gfortran:f77: scipy/integrate/odepack/prepji.f
gfortran:f77: scipy/integrate/odepack/prja.f
gfortran:f77: scipy/integrate/odepack/prjs.f
gfortran:f77: scipy/integrate/odepack/rchek.f
gfortran:f77: scipy/integrate/odepack/roots.f
gfortran:f77: scipy/integrate/odepack/slsbt.f
scipy/integrate/odepack/slsbt.f:1.39:
      subroutine slsbt (wm, iwm, x, tem)
      1
Warning: Unused dummy argument 'tem' at (1)
gfortran:f77: scipy/integrate/odepack/slss.f
scipy/integrate/odepack/slss.f:1.38:
      subroutine slss (wk, iwk, x, tem)
      1
Warning: Unused dummy argument 'tem' at (1)
gfortran:f77: scipy/integrate/odepack/solbt.f
gfortran:f77: scipy/integrate/odepack/solsy.f
scipy/integrate/odepack/solsy.f:1.39:
      subroutine solsy (wm, iwm, x, tem)
      1
Warning: Unused dummy argument 'tem' at (1)
gfortran:f77: scipy/integrate/odepack/srcar.f
gfortran:f77: scipy/integrate/odepack/srcma.f
gfortran:f77: scipy/integrate/odepack/srcms.f
gfortran:f77: scipy/integrate/odepack/srcom.f
gfortran:f77: scipy/integrate/odepack/sro.f
gfortran:f77: scipy/integrate/odepack/stoda.f
scipy/integrate/odepack/stoda.f: In function 'stoda':
scipy/integrate/odepack/stoda.f:578:0: warning: 'pdh' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stoda.f:223:0: warning: 'iredo' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stoda.f:372:0: warning: 'dsm' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stoda.f:18:0: warning: 'rh' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/stode.f
scipy/integrate/odepack/stode.f: In function 'stode':
scipy/integrate/odepack/stode.f:203:0: warning: 'iredo' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stode.f:326:0: warning: 'dsm' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stode.f:14:0: warning: 'rh' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/stodi.f
scipy/integrate/odepack/stodi.f: In function 'stodi':
scipy/integrate/odepack/stodi.f:401:0: warning: 'dsm' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stodi.f:15:0: warning: 'rh' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/stodi.f:211:0: warning: 'iredo' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/vmnorm.f
gfortran:f77: scipy/integrate/odepack/vnorm.f
gfortran:f77: scipy/integrate/odepack/vode.f
scipy/integrate/odepack/vode.f:2373.4:
  700 R = ONE/TQ(2)
      1
Warning: Label 700 at (1) defined but not used
scipy/integrate/odepack/vode.f:3514.40:
      SUBROUTINE XERRWD (MSG, NMES, NERR, LEVEL, NI, I1, I2, NR, R1, R2)
      1
Warning: Unused dummy argument 'nerr' at (1)
scipy/integrate/odepack/vode.f:3495.44:
      DOUBLE PRECISION FUNCTION D1MACH (IDUM)
      1
Warning: Unused dummy argument 'idum' at (1)
scipy/integrate/odepack/vode.f:2739.42:
      SUBROUTINE DVNLSD (Y, YH, LDYH, VSAV, SAVF, EWT, ACOR, IWM, WM,
      1
Warning: Unused dummy argument 'vsav' at (1)
scipy/integrate/odepack/vode.f: In function 'ixsav':
scipy/integrate/odepack/vode.f:3610:0: warning: '__result_ixsav' may be used uninitialized in this function [-Wuninitialized]
scipy/integrate/odepack/vode.f: In function 'dvode':
scipy/integrate/odepack/vode.f:1487:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/odepack/xerrwv.f
scipy/integrate/odepack/xerrwv.f:1.40:
      subroutine xerrwv (msg, nmes, nerr, level, ni, i1, i2, nr, r1, r2)
      1
Warning: Unused dummy argument 'nerr' at (1)
gfortran:f77: scipy/integrate/odepack/xsetf.f
gfortran:f77: scipy/integrate/odepack/xsetun.f
gfortran:f77: scipy/integrate/odepack/zvode.f
scipy/integrate/odepack/zvode.f:2394.4:
  700 R = ONE/TQ(2)
      1
Warning: Label 700 at (1) defined but not used
scipy/integrate/odepack/zvode.f:2760.42:
      SUBROUTINE ZVNLSD (Y, YH, LDYH, VSAV, SAVF, EWT, ACOR, IWM, WM,
      1
Warning: Unused dummy argument 'vsav' at (1)
scipy/integrate/odepack/zvode.f: In function 'zvode':
scipy/integrate/odepack/zvode.f:1502:0: warning: 'ihit' may be used uninitialized in this function [-Wuninitialized]
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libodepack.a
ar: adding 10 object files to build/temp.macosx-10.4-x86_64-2.7/libodepack.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libodepack.a
building 'dop' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
creating build/temp.macosx-10.4-x86_64-2.7/scipy/integrate/dop
compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c'
gfortran:f77: scipy/integrate/dop/dop853.f
scipy/integrate/dop/dop853.f:364.42:
     &  SOLOUT,IOUT,IDID,NMAX,UROUND,METH,NSTIFF,SAFE,BETA,FAC1,FAC2,
      1
Warning: Unused dummy argument 'meth' at (1)
scipy/integrate/dop/dop853.f:791.38:
      FUNCTION HINIT853(N,FCN,X,Y,XEND,POSNEG,F0,F1,Y1,IORD,
      1
Warning: Unused dummy argument 'xend' at (1)
scipy/integrate/dop/dop853.f: In function 'contd8':
scipy/integrate/dop/dop853.f:870:0: warning: control reaches end of non-void function [-Wreturn-type]
scipy/integrate/dop/dop853.f: In function 'dp86co':
scipy/integrate/dop/dop853.f:686:0: warning: 'nonsti' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/integrate/dop/dopri5.f
scipy/integrate/dop/dopri5.f:558.35:
      FUNCTION HINIT(N,FCN,X,Y,XEND,POSNEG,F0,F1,Y1,IORD,
      1
Warning: Unused dummy argument 'xend' at (1)
scipy/integrate/dop/dopri5.f: In function 'contd5':
scipy/integrate/dop/dopri5.f:637:0: warning: control reaches end of non-void function [-Wreturn-type]
scipy/integrate/dop/dopri5.f: In function 'dopcor':
scipy/integrate/dop/dopri5.f:491:0: warning: 'nonsti' may be used uninitialized in this function [-Wuninitialized]
ar: adding 2 object files to build/temp.macosx-10.4-x86_64-2.7/libdop.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libdop.a
building 'fitpack' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
creating build/temp.macosx-10.4-x86_64-2.7/scipy/interpolate
creating build/temp.macosx-10.4-x86_64-2.7/scipy/interpolate/fitpack
compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c'
gfortran:f77: scipy/interpolate/fitpack/bispeu.f
scipy/interpolate/fitpack/bispeu.f:50.18:
      integer i,iw,lwest
      1
Warning: Unused variable 'iw' declared at (1)
scipy/interpolate/fitpack/bispeu.f:44.37:
      integer nx,ny,kx,ky,m,lwrk,kwrk,ier
      1
Warning: Unused variable 'kwrk' declared at (1)
gfortran:f77: scipy/interpolate/fitpack/bispev.f
gfortran:f77: scipy/interpolate/fitpack/clocur.f
gfortran:f77: scipy/interpolate/fitpack/cocosp.f
gfortran:f77: scipy/interpolate/fitpack/concon.f
gfortran:f77: scipy/interpolate/fitpack/concur.f
scipy/interpolate/fitpack/concur.f:287.21:
      real*8 tol,dist
      1
Warning: Unused variable 'dist' declared at (1)
gfortran:f77: scipy/interpolate/fitpack/cualde.f
gfortran:f77: scipy/interpolate/fitpack/curev.f
gfortran:f77: scipy/interpolate/fitpack/curfit.f
gfortran:f77: scipy/interpolate/fitpack/dblint.f
gfortran:f77: scipy/interpolate/fitpack/evapol.f
gfortran:f77: scipy/interpolate/fitpack/fourco.f
gfortran:f77: scipy/interpolate/fitpack/fpader.f
gfortran:f77: scipy/interpolate/fitpack/fpadno.f
gfortran:f77: scipy/interpolate/fitpack/fpadpo.f
gfortran:f77: scipy/interpolate/fitpack/fpback.f
gfortran:f77: scipy/interpolate/fitpack/fpbacp.f
gfortran:f77: scipy/interpolate/fitpack/fpbfout.f
scipy/interpolate/fitpack/fpbfout.f: In function 'fpbfou':
scipy/interpolate/fitpack/fpbfout.f:117:0: warning: 'term' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpbisp.f
gfortran:f77: scipy/interpolate/fitpack/fpbspl.f
gfortran:f77: scipy/interpolate/fitpack/fpchec.f
gfortran:f77: scipy/interpolate/fitpack/fpched.f
gfortran:f77: scipy/interpolate/fitpack/fpchep.f
gfortran:f77: scipy/interpolate/fitpack/fpclos.f
scipy/interpolate/fitpack/fpclos.f:395.35:
      if(fpold-fp.gt.acc) npl1 = rn*fpms/(fpold-fp)
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpclos.f: In function 'fpclos':
scipy/interpolate/fitpack/fpclos.f:396:0: warning: 'nplus' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:438:0: warning: 'nmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:473:0: warning: 'n10' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:16:0: warning: 'i1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:395:0: warning: 'fpold' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:472:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:470:0: warning: 'fp0' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpclos.f:647:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpcoco.f
scipy/interpolate/fitpack/fpcoco.f: In function 'fpcoco':
scipy/interpolate/fitpack/fpcoco.f:137:0: warning: 'k' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpcons.f
scipy/interpolate/fitpack/fpcons.f:224.35:
      if(fpold-fp.gt.acc) npl1 = rn*fpms/(fpold-fp)
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpcons.f: In function 'fpcons':
scipy/interpolate/fitpack/fpcons.f:225:0: warning: 'nplus' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:264:0: warning: 'nmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:383:0: warning: 'nk1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:15:0: warning: 'mm' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:224:0: warning: 'fpold' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:301:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:299:0: warning: 'fp0' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcons.f:402:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpcosp.f
gfortran:f77: scipy/interpolate/fitpack/fpcsin.f
gfortran:f77: scipy/interpolate/fitpack/fpcurf.f
scipy/interpolate/fitpack/fpcurf.f:186.35:
      if(fpold-fp.gt.acc) npl1 = rn*fpms/(fpold-fp)
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpcurf.f: In function 'fpcurf':
scipy/interpolate/fitpack/fpcurf.f:187:0: warning: 'nplus' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcurf.f:219:0: warning: 'nmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcurf.f:186:0: warning: 'fpold' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcurf.f:256:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcurf.f:254:0: warning: 'fp0' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpcurf.f:319:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpcuro.f
gfortran:f77: scipy/interpolate/fitpack/fpcyt1.f
gfortran:f77: scipy/interpolate/fitpack/fpcyt2.f
gfortran:f77: scipy/interpolate/fitpack/fpdeno.f
gfortran:f77: scipy/interpolate/fitpack/fpdisc.f
gfortran:f77: scipy/interpolate/fitpack/fpfrno.f
scipy/interpolate/fitpack/fpfrno.f: In function 'fpfrno':
scipy/interpolate/fitpack/fpfrno.f:42:0: warning: 'k' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpgivs.f
scipy/interpolate/fitpack/fpgivs.f: In function 'fpgivs':
scipy/interpolate/fitpack/fpgivs.f:16:0: warning: 'dd' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpgrdi.f
scipy/interpolate/fitpack/fpgrdi.f:296.4:
  400 if(nrold.eq.number) go to 420
      1
Warning: Label 400 at (1) defined but not used
scipy/interpolate/fitpack/fpgrdi.f: In function 'fpgrdi':
scipy/interpolate/fitpack/fpgrdi.f:204:0: warning: 'pinv' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpgrpa.f
gfortran:f77: scipy/interpolate/fitpack/fpgrre.f
scipy/interpolate/fitpack/fpgrre.f: In function 'fpgrre':
scipy/interpolate/fitpack/fpgrre.f:199:0: warning: 'pinv' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpgrsp.f
scipy/interpolate/fitpack/fpgrsp.f:348.4:
  400 if(nrold.eq.number) go to 420
      1
Warning: Label 400 at (1) defined but not used
scipy/interpolate/fitpack/fpgrsp.f: In function 'fpgrsp':
scipy/interpolate/fitpack/fpgrsp.f:239:0: warning: 'pinv' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpinst.f
gfortran:f77: scipy/interpolate/fitpack/fpintb.f
scipy/interpolate/fitpack/fpintb.f: In function 'fpintb':
scipy/interpolate/fitpack/fpintb.f:114:0: warning: 'ia' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpknot.f
scipy/interpolate/fitpack/fpknot.f: In function 'fpknot':
scipy/interpolate/fitpack/fpknot.f:42:0: warning: 'number' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpknot.f:40:0: warning: 'maxpt' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpknot.f:41:0: warning: 'maxbeg' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpopdi.f
gfortran:f77: scipy/interpolate/fitpack/fpopsp.f
scipy/interpolate/fitpack/fpopsp.f:58.16:
      real*8 res,sq,sqq,sq0,sq1,step1,step2,three
      1
Warning: Unused variable 'res' declared at (1)
gfortran:f77: scipy/interpolate/fitpack/fporde.f
gfortran:f77: scipy/interpolate/fitpack/fppara.f
scipy/interpolate/fitpack/fppara.f:202.35:
      if(fpold-fp.gt.acc) npl1 = rn*fpms/(fpold-fp)
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fppara.f: In function 'fppara':
scipy/interpolate/fitpack/fppara.f:203:0: warning: 'nplus' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppara.f:242:0: warning: 'nmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppara.f:202:0: warning: 'fpold' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppara.f:279:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppara.f:277:0: warning: 'fp0' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppara.f:362:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fppasu.f
scipy/interpolate/fitpack/fppasu.f:272.33:
      if(reducu.gt.acc) npl1 = rn*fpms/reducu
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fppasu.f:279.33:
      if(reducv.gt.acc) npl1 = rn*fpms/reducv
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fppasu.f: In function 'fppasu':
scipy/interpolate/fitpack/fppasu.f:336:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:308:0: warning: 'nve' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:295:0: warning: 'nue' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:251:0: warning: 'nmaxv' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:251:0: warning: 'nmaxu' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:367:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:231:0: warning: 'perv' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppasu.f:209:0: warning: 'peru' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpperi.f
scipy/interpolate/fitpack/fpperi.f:339.35:
      if(fpold-fp.gt.acc) npl1 = rn*fpms/(fpold-fp)
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpperi.f: In function 'fpperi':
scipy/interpolate/fitpack/fpperi.f:340:0: warning: 'nplus' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:375:0: warning: 'nmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:410:0: warning: 'n10' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:16:0: warning: 'i1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:339:0: warning: 'fpold' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:409:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:407:0: warning: 'fp0' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpperi.f:558:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fppocu.f
gfortran:f77: scipy/interpolate/fitpack/fppogr.f
scipy/interpolate/fitpack/fppogr.f:286.33:
      if(reducu.gt.acc) npl1 = rn*fpms/reducu
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fppogr.f:293.33:
      if(reducv.gt.acc) npl1 = rn*fpms/reducv
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fppogr.f: In function 'fppogr':
scipy/interpolate/fitpack/fppogr.f:353:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:307:0: warning: 'nplu' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:260:0: warning: 'nvmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:325:0: warning: 'nve' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:260:0: warning: 'numax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:312:0: warning: 'nue' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppogr.f:368:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fppola.f
scipy/interpolate/fitpack/fppola.f:440.4:
  440 do 450 i=1,nrint
      1
Warning: Label 440 at (1) defined but not used
scipy/interpolate/fitpack/fppola.f:377.4:
  370 in = nummer(in)
      1
Warning: Label 370 at (1) defined but not used
scipy/interpolate/fitpack/fppola.f:23.25:
     *  iter,i1,i2,i3,j,jl,jrot,j1,j2,k,l,la,lf,lh,ll,lu,lv,lwest,l1,l2,
      1
Warning: Unused variable 'jl' declared at (1)
scipy/interpolate/fitpack/fppola.f: In function 'fppola':
scipy/interpolate/fitpack/fppola.f:1:0: warning: 'nv4' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppola.f:578:0: warning: 'nu4' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppola.f:821:0: warning: 'lwest' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppola.f:25:0: warning: 'iband1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppola.f:565:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fppola.f:788:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fprank.f
gfortran:f77: scipy/interpolate/fitpack/fprati.f
gfortran:f77: scipy/interpolate/fitpack/fpregr.f
scipy/interpolate/fitpack/fpregr.f:246.33:
      if(reducx.gt.acc) npl1 = rn*fpms/reducx
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpregr.f:253.33:
      if(reducy.gt.acc) npl1 = rn*fpms/reducy
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpregr.f: In function 'fpregr':
scipy/interpolate/fitpack/fpregr.f:310:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpregr.f:282:0: warning: 'nye' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpregr.f:269:0: warning: 'nxe' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpregr.f:225:0: warning: 'nmaxy' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpregr.f:225:0: warning: 'nmaxx' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpregr.f:341:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fprota.f
gfortran:f77: scipy/interpolate/fitpack/fprppo.f
scipy/interpolate/fitpack/fprppo.f: In function 'fprppo':
scipy/interpolate/fitpack/fprppo.f:1:0: warning: 'j' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fprpsp.f
gfortran:f77: scipy/interpolate/fitpack/fpseno.f
gfortran:f77: scipy/interpolate/fitpack/fpspgr.f
scipy/interpolate/fitpack/fpspgr.f:315.33:
      if(reducu.gt.acc) npl1 = rn*fpms/reducu
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpspgr.f:322.33:
      if(reducv.gt.acc) npl1 = rn*fpms/reducv
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/interpolate/fitpack/fpspgr.f: In function 'fpspgr':
scipy/interpolate/fitpack/fpspgr.f:382:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:336:0: warning: 'nplu' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:287:0: warning: 'nvmax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:354:0: warning: 'nve' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:287:0: warning: 'numax' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:341:0: warning: 'nue' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpspgr.f:397:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpsphe.f
scipy/interpolate/fitpack/fpsphe.f:390.4:
  440 do 450 i=1,nrint
      1
Warning: Label 440 at (1) defined but not used
scipy/interpolate/fitpack/fpsphe.f:327.4:
  330 in = nummer(in)
      1
Warning: Label 330 at (1) defined but not used
scipy/interpolate/fitpack/fpsphe.f: In function 'fpsphe':
scipy/interpolate/fitpack/fpsphe.f:519:0: warning: 'ntt' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:23:0: warning: 'nt4' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:1:0: warning: 'np4' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:746:0: warning: 'lwest' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:21:0: warning: 'iband1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:510:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsphe.f:713:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpsuev.f
gfortran:f77: scipy/interpolate/fitpack/fpsurf.f
scipy/interpolate/fitpack/fpsurf.f:305.4:
  310 do 320 i=1,nrint
      1
Warning: Label 310 at (1) defined but not used
scipy/interpolate/fitpack/fpsurf.f:245.4:
  240 in = nummer(in)
      1
Warning: Label 240 at (1) defined but not used
scipy/interpolate/fitpack/fpsurf.f: In function 'fpsurf':
scipy/interpolate/fitpack/fpsurf.f:567:0: warning: 'nyy' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:433:0: warning: 'nk1y' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:495:0: warning: 'nk1x' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:621:0: warning: 'lwest' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:19:0: warning: 'iband1' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:425:0: warning: 'fpms' may be used uninitialized in this function [-Wuninitialized]
scipy/interpolate/fitpack/fpsurf.f:588:0: warning: 'acc' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fpsysy.f
gfortran:f77: scipy/interpolate/fitpack/fptrnp.f
scipy/interpolate/fitpack/fptrnp.f: In function 'fptrnp':
scipy/interpolate/fitpack/fptrnp.f:53:0: warning: 'pinv' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/fptrpe.f
scipy/interpolate/fitpack/fptrpe.f:17.21:
      integer i,iband,irot,it,ii,i2,i3,j,jj,l,mid,nmd,m2,m3,
      1
Warning: Unused variable 'iband' declared at (1)
scipy/interpolate/fitpack/fptrpe.f: In function 'fptrpe':
scipy/interpolate/fitpack/fptrpe.f:64:0: warning: 'pinv' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/insert.f
gfortran:f77: scipy/interpolate/fitpack/parcur.f
gfortran:f77: scipy/interpolate/fitpack/parder.f
gfortran:f77: scipy/interpolate/fitpack/parsur.f
gfortran:f77: scipy/interpolate/fitpack/percur.f
gfortran:f77: scipy/interpolate/fitpack/pogrid.f
gfortran:f77: scipy/interpolate/fitpack/polar.f
scipy/interpolate/fitpack/polar.f:353.10:
     *  lbv,lco,lf,lff,lfp,lh,lq,lsu,lsv,lwest,maxit,ncest,ncc,nuu,
      1
Warning: Unused variable 'jlbv' declared at (1)
gfortran:f77: scipy/interpolate/fitpack/profil.f
gfortran:f77: scipy/interpolate/fitpack/regrid.f
gfortran:f77: scipy/interpolate/fitpack/spalde.f
gfortran:f77: scipy/interpolate/fitpack/spgrid.f
gfortran:f77: scipy/interpolate/fitpack/sphere.f
scipy/interpolate/fitpack/sphere.f:318.10:
     *  lbp,lco,lf,lff,lfp,lh,lq,lst,lsp,lwest,maxit,ncest,ncc,ntt,
      1
Warning: Unused variable 'jlbp' declared at (1)
gfortran:f77: scipy/interpolate/fitpack/splder.f
scipy/interpolate/fitpack/splder.f:84.4:
   30 ier = 0
      1
Warning: Label 30 at (1) defined but not used
scipy/interpolate/fitpack/splder.f: In function 'splder':
scipy/interpolate/fitpack/splder.f:135:0: warning: 'k2' may be used uninitialized in this function [-Wuninitialized]
gfortran:f77: scipy/interpolate/fitpack/splev.f
scipy/interpolate/fitpack/splev.f:80.4:
   30 ier = 0
      1
Warning: Label 30 at (1) defined but not used
gfortran:f77: scipy/interpolate/fitpack/splint.f
gfortran:f77: scipy/interpolate/fitpack/sproot.f
gfortran:f77: scipy/interpolate/fitpack/surev.f
gfortran:f77: scipy/interpolate/fitpack/surfit.f
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libfitpack.a
ar: adding 34 object files to build/temp.macosx-10.4-x86_64-2.7/libfitpack.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libfitpack.a
building 'odrpack' library
compiling Fortran sources
Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops
creating build/temp.macosx-10.4-x86_64-2.7/scipy/odr
creating build/temp.macosx-10.4-x86_64-2.7/scipy/odr/odrpack
compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c'
gfortran:f77: scipy/odr/odrpack/d_odr.f
scipy/odr/odrpack/d_odr.f:1014.13:
      NETA = MAX(TWO,P5-LOG10(ETA))
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/odr/odrpack/d_odr.f:2955.13:
      NTOL = MAX(ONE,P5-LOG10(TOL))
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
scipy/odr/odrpack/d_odr.f:6032.16:
      J = WORK(WRK3+I) - 1
      1
Warning: Possible change of value in conversion from REAL(8) to INTEGER(4) at (1)
gfortran:f77: scipy/odr/odrpack/d_mprec.f
gfortran:f77: scipy/odr/odrpack/dlunoc.f
gfortran:f77: scipy/odr/odrpack/d_lpk.f
ar: adding 4 object files to build/temp.macosx-10.4-x86_64-2.7/libodrpack.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libodrpack.a
building 'minpack' library
compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second-underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops creating build/temp.macosx-10.4-x86_64-2.7/scipy/optimize creating build/temp.macosx-10.4-x86_64-2.7/scipy/optimize/minpack compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' gfortran:f77: scipy/optimize/minpack/chkder.f gfortran:f77: scipy/optimize/minpack/dogleg.f gfortran:f77: scipy/optimize/minpack/dpmpar.f gfortran:f77: scipy/optimize/minpack/enorm.f scipy/optimize/minpack/enorm.f: In function ?enorm?: scipy/optimize/minpack/enorm.f:1:0: warning: ?__result_enorm? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/fdjac1.f gfortran:f77: scipy/optimize/minpack/fdjac2.f gfortran:f77: scipy/optimize/minpack/hybrd.f scipy/optimize/minpack/hybrd.f: In function ?hybrd?: scipy/optimize/minpack/hybrd.f:404:0: warning: ?xnorm? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/hybrd1.f gfortran:f77: scipy/optimize/minpack/hybrj.f scipy/optimize/minpack/hybrj.f: In function ?hybrj?: scipy/optimize/minpack/hybrj.f:386:0: warning: ?xnorm? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/hybrj1.f gfortran:f77: scipy/optimize/minpack/lmder.f scipy/optimize/minpack/lmder.f: In function ?lmder?: scipy/optimize/minpack/lmder.f:420:0: warning: ?xnorm? may be used uninitialized in this function [-Wuninitialized] scipy/optimize/minpack/lmder.f:387:0: warning: ?temp? 
may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/lmder1.f gfortran:f77: scipy/optimize/minpack/lmdif.f scipy/optimize/minpack/lmdif.f: In function ?lmdif?: scipy/optimize/minpack/lmdif.f:422:0: warning: ?xnorm? may be used uninitialized in this function [-Wuninitialized] scipy/optimize/minpack/lmdif.f:389:0: warning: ?temp? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/lmdif1.f gfortran:f77: scipy/optimize/minpack/lmpar.f gfortran:f77: scipy/optimize/minpack/lmstr.f scipy/optimize/minpack/lmstr.f: In function ?lmstr?: scipy/optimize/minpack/lmstr.f:434:0: warning: ?xnorm? may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/lmstr1.f gfortran:f77: scipy/optimize/minpack/qform.f gfortran:f77: scipy/optimize/minpack/qrfac.f gfortran:f77: scipy/optimize/minpack/qrsolv.f gfortran:f77: scipy/optimize/minpack/r1mpyq.f scipy/optimize/minpack/r1mpyq.f: In function ?r1mpyq?: scipy/optimize/minpack/r1mpyq.f:54:0: warning: ?sin? may be used uninitialized in this function [-Wuninitialized] scipy/optimize/minpack/r1mpyq.f:68:0: warning: ?cos? 
may be used uninitialized in this function [-Wuninitialized] gfortran:f77: scipy/optimize/minpack/r1updt.f gfortran:f77: scipy/optimize/minpack/rwupdt.f ar: adding 23 object files to build/temp.macosx-10.4-x86_64-2.7/libminpack.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libminpack.a building 'rootfind' library compiling C sources C compiler: /usr/bin/llvm-gcc -fno-strict-aliasing -O3 -march=core2 -w -pipe -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes creating build/temp.macosx-10.4-x86_64-2.7/scipy/optimize/Zeros compile options: '-I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' llvm-gcc: scipy/optimize/Zeros/bisect.c llvm-gcc: scipy/optimize/Zeros/brenth.c llvm-gcc: scipy/optimize/Zeros/ridder.c llvm-gcc: scipy/optimize/Zeros/brentq.c ar: adding 4 object files to build/temp.macosx-10.4-x86_64-2.7/librootfind.a ranlib:@ build/temp.macosx-10.4-x86_64-2.7/librootfind.a building 'superlu_src' library compiling C sources C compiler: /usr/bin/llvm-gcc -fno-strict-aliasing -O3 -march=core2 -w -pipe -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/SuperLU creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/dsolve/SuperLU/SRC compile options: '-DUSE_VENDOR_BLAS=1 -Iscipy/sparse/linalg/dsolve/SuperLU/SRC -I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c' llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cutil.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotgrowth.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/creadrb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scolumn_dfs.c 
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ccopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zlaqgs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotgrowth.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ddiagonal.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas2.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zlacon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dpruneL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dlaqgs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssv.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsrfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadhb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_heap_relax_snode.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsitrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sutil.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ddrop_row.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas3.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zcopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dutil.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsU.c llvm-gcc: 
scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssv.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dcopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas2.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssvx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dlangs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/get_perm_c.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/spruneL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/xerbla.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/mark_relax.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgscon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssvx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas3.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sldperm.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cdrop_row.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsequ.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/slacon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sdiagonal.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsequ.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsrfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/util.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadtriple.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_ienv.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cpruneL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadrb.c 
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/slaqgs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/izmax1.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cldperm.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas2.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/claqgs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/creadhb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadhb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas3.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgscon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dlamch.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsisx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zlangs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas2.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/relax_snode.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotgrowth.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsisx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/clangs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zldperm.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadhb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/scsum1.c llvm-gcc: 
scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsequ.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zpruneL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cdiagonal.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsitrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgscon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/slamch.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssvx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/mmd.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadrb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_relax_snode.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_sdrop_row.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dsnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsisx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsisx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dzsum1.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsequ.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssvx.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssv.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/icmax1.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dldperm.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zdiagonal.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zdrop_row.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/lsame.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cmemory.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scopy_to_ucol.c 
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadrb.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsrfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/memory.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dlacon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_csnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_preorder.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadtriple.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zsnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/scopy_to_ucol.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsrfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotL.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgscon.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadtriple.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsitrf.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ssnode_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotgrowth.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zmemory.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_bmod.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpanel_dfs.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zutil.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssv.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/heap_relax_snode.c llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/creadtriple.c llvm-gcc: 
scipy/sparse/linalg/dsolve/SuperLU/SRC/clacon.c
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsitrf.c
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dmemory.c
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/superlu_timer.c
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/slangs.c
llvm-gcc: scipy/sparse/linalg/dsolve/SuperLU/SRC/dGetDiagU.c
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libsuperlu_src.a
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libsuperlu_src.a
ar: adding 50 object files to build/temp.macosx-10.4-x86_64-2.7/libsuperlu_src.a
ar: adding 25 object files to build/temp.macosx-10.4-x86_64-2.7/libsuperlu_src.a
ranlib:@ build/temp.macosx-10.4-x86_64-2.7/libsuperlu_src.a
building 'arpack' library
compiling C sources
C compiler: /usr/bin/llvm-gcc -fno-strict-aliasing -O3 -march=core2 -w -pipe -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes

creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen
creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack
creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK
creating build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS
compile options: '-Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c'
llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:4: error: expected ';', ',' or ')' before 'float'
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:10: error: expected ';', ',' or ')' before 'float'
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:16: error: expected ';', ',' or ')' before '*' token
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:21: error: expected ';', ',' or ')' before '*' token
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:4: error: expected ';', ',' or ')' before 'float'
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:10: error: expected ';', ',' or ')' before 'float'
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:16: error: expected ';', ',' or ')' before '*' token
scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:21: error: expected ';', ',' or ')' before '*' token
error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -O3 -march=core2 -w -pipe -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC -I/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/include -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o" failed with exit status 1

From cournape at gmail.com Sun Oct 9 07:10:06 2011
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 9 Oct 2011 12:10:06 +0100
Subject: [SciPy-User] OS X Lion build fails: arpack
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 9, 2011 at 10:49 AM, Paul Anton Letnes
 wrote:
> Hi everyone.
>
> I am trying to build scipy using gcc and gfortran downloaded from here:
> http://hpc.sourceforge.net/

Don't use this compiler, it is known to cause many issues. Use the one
from here:

http://r.research.att.com/tools/

cheers,

David

From ckkart at hoc.net Sun Oct 9 07:51:46 2011
From: ckkart at hoc.net (Christian K.)
Date: Sun, 9 Oct 2011 11:51:46 +0000 (UTC)
Subject: [SciPy-User] MLE with stats.lognorm
Message-ID: 

Hi,

I wonder whether I am doing something wrong or if the following is to be
expected (using scipy 0.9):

In [38]: from scipy import stats

In [39]: dist = stats.lognorm(0.25,scale=200.0)

In [40]: samples = dist.rvs(size=100)

In [41]: print stats.lognorm.fit(samples)
C:\Python26\lib\site-packages\scipy\optimize\optimize.py:280: RuntimeWarning:
invalid value encountered in subtract
  and max(abs(fsim[0]-fsim[1:])) <= ftol):
(1.0, 158.90310231282845, 21.013288720647015)

In [42]: print stats.lognorm.fit(samples, floc=0)
[2.2059200167655884, 0, 21.013288720647015]

Even when fixing loc=0.0, the results from the MLE for s and scale are very
different from the input parameters. Is lognorm

Any hints are highly appreciated.

Best regards, Christian

From paul.anton.letnes at gmail.com Sun Oct 9 07:58:46 2011
From: paul.anton.letnes at gmail.com (Paul Anton Letnes)
Date: Sun, 9 Oct 2011 13:58:46 +0200
Subject: [SciPy-User] OS X Lion build fails: arpack
In-Reply-To: 
References: 
Message-ID: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com>

Aha - I tried the homebrew gfortran, which downloads gfortran from
r.research.att.com. However, I am getting exactly the same error message.
Hence, I suspect the problem isn't simply the gfortran compiler
version/build.

Paul

On 9. okt. 2011, at 13:10, David Cournapeau wrote:

> On Sun, Oct 9, 2011 at 10:49 AM, Paul Anton Letnes
> wrote:
>> Hi everyone.
>>
>> I am trying to build scipy using gcc and gfortran downloaded from here:
>> http://hpc.sourceforge.net/
>
> Don't use this compiler, it is known to cause many issues.
Use the one
> from here:
>
> http://r.research.att.com/tools/
>
> cheers,
>
> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com Sun Oct 9 08:06:32 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 9 Oct 2011 08:06:32 -0400
Subject: [SciPy-User] MLE with stats.lognorm
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 9, 2011 at 7:51 AM, Christian K. wrote:
> Hi,
>
> I wonder whether I am doing something wrong or if the following is to be
> expected (using scipy 0.9):
>
> In [38]: from scipy import stats
>
> In [39]: dist = stats.lognorm(0.25,scale=200.0)
>
> In [40]: samples = dist.rvs(size=100)
>
> In [41]: print stats.lognorm.fit(samples)
> C:\Python26\lib\site-packages\scipy\optimize\optimize.py:280: RuntimeWarning:
> invalid value encountered in subtract
> and max(abs(fsim[0]-fsim[1:])) <= ftol):
> (1.0, 158.90310231282845, 21.013288720647015)
>
> In [42]: print stats.lognorm.fit(samples, floc=0)
> [2.2059200167655884, 0, 21.013288720647015]
>
> Even when fixing loc=0.0, the results from the MLE for s and scale are very
> different from the input parameters. Is lognorm
>
> Any hints are highly appreciated.

I just looked at similar cases, for the changes in scipy 0.9 and
starting values, see
http://projects.scipy.org/scipy/ticket/1530

Essentially, you need to find better starting values and give them to fit.

Can you add it to the ticket? It's not quite the same, but I guess it
is also that fix_loc_scale doesn't make sense.

Note, I also get many of these warnings,

> invalid value encountered in subtract
> and max(abs(fsim[0]-fsim[1:])) <= ftol):

they are caused when np.inf is returned for invalid arguments. In many
cases optimize.fmin evaluates parameters that are not valid, but most
of the time that doesn't seem to cause any problems, except it's
annoying.
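[Editorial addition, not part of the original message: for the floc=0 case discussed in this thread, the lognormal MLE has a closed form and needs no optimizer or starting values at all -- log(x) is normal, so the MLE of s is the standard deviation of log(x) and the MLE of scale is exp(mean(log(x))). The standard-library sketch below is illustrative; the sample size and seed are chosen here, only the parameters 0.25 and 200.0 mirror the thread's example.]

```python
import math
import random

# Closed-form lognormal MLE with loc fixed at 0: fit a normal
# distribution to log(x) and read off its moments.
random.seed(12345)
s_true, scale_true = 0.25, 200.0
samples = [random.lognormvariate(math.log(scale_true), s_true)
           for _ in range(10000)]

logx = [math.log(x) for x in samples]
mu_hat = sum(logx) / len(logx)                                  # MLE of log(scale)
s_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in logx) / len(logx))  # MLE of s
scale_hat = math.exp(mu_hat)

print(s_hat, scale_hat)  # close to 0.25 and 200.0
```

These closed-form values are also convenient starting values to pass to stats.lognorm.fit when loc is not fixed.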
Josef

>
> Best regards, Christian
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From eneide.odissea at gmail.com Sun Oct 9 08:06:56 2011
From: eneide.odissea at gmail.com (eneide.odissea)
Date: Sun, 9 Oct 2011 08:06:56 -0400
Subject: [SciPy-User] Multivariate Student's t distribution in python
Message-ID: 

Hi all
Just a quick info that I cannot sort it out.
Do you know if it is available in Python a Multivariate Student's t
distribution?
I cannot find it anywhere. There is a multivariate normal in the package
'numpy.random'
(http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html), but
nothing in 'scipy.stats'.
Can you help me?
Kind Regards
EO

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Sun Oct 9 08:14:29 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 9 Oct 2011 08:14:29 -0400
Subject: [SciPy-User] MLE with stats.lognorm
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 9, 2011 at 8:06 AM, wrote:
> On Sun, Oct 9, 2011 at 7:51 AM, Christian K. wrote:
>> Hi,
>>
>> I wonder whether I am doing something wrong or if the following is to be
>> expected (using scipy 0.9):
>>
>> In [38]: from scipy import stats
>>
>> In [39]: dist = stats.lognorm(0.25,scale=200.0)
>>
>> In [40]: samples = dist.rvs(size=100)
>>
>> In [41]: print stats.lognorm.fit(samples)
>> C:\Python26\lib\site-packages\scipy\optimize\optimize.py:280: RuntimeWarning:
>> invalid value encountered in subtract
>> and max(abs(fsim[0]-fsim[1:])) <= ftol):
>> (1.0, 158.90310231282845, 21.013288720647015)
>>
>> In [42]: print stats.lognorm.fit(samples, floc=0)
>> [2.2059200167655884, 0, 21.013288720647015]
>>
>> Even when fixing loc=0.0, the results from the MLE for s and scale are very
>> different from the input parameters. Is lognorm
>>
>> Any hints are highly appreciated.
>
> I just looked at similar cases, for the changes in scipy 0.9 and
> starting values, see
> http://projects.scipy.org/scipy/ticket/1530
>
> Essentially, you need to find better starting values and give them to fit.
>
> Can you add it to the ticket? It's not quite the same, but I guess it
> is also that fix_loc_scale doesn't make sense.
>
> Note, I also get many of these warnings,
>
>> invalid value encountered in subtract
>> and max(abs(fsim[0]-fsim[1:])) <= ftol):
>
> they are caused when np.inf is returned for invalid arguments. In many
> cases optimize.fmin evaluates parameters that are not valid, but most
> of the time that doesn't seem to cause any problems, except it's
> annoying.

for example with starting value for loc

>>> print stats.lognorm.fit(x, loc=0)
(0.23800805074491538, 0.034900026034516723, 196.31113801786194)

Josef

>
> Josef
>
>
>>
>> Best regards, Christian
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>

From josef.pktd at gmail.com Sun Oct 9 08:19:09 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 9 Oct 2011 08:19:09 -0400
Subject: [SciPy-User] Multivariate Student's t distribution in python
In-Reply-To: 
References: 
Message-ID: 

On Sun, Oct 9, 2011 at 8:06 AM, eneide.odissea wrote:
> Hi all
> Just a quick info that I cannot sort it out.
> Do you know if it is available in Python a Multivariate Student's t
> distribution?
> I cannot find it anywhere. There is a multivariate normal in the package
> 'numpy.random'
> (http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html), but
> nothing in 'scipy.stats'.
> Can you help me?

Which methods do you need?

I have most of it in scikits.statsmodels, but it's in the sandbox,
mostly tested. If you just need rvs, then it's just a few lines of
code.
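[Editorial addition, not part of the original message: one way to write those "few lines" of rvs code is the standard normal/chi-square mixture representation of the multivariate t. The sketch below uses numpy; the function and variable names are invented here for illustration and this is not the statsmodels sandbox implementation.]

```python
import numpy as np

def multivariate_t_rvs(mean, cov, df, n, rng):
    # Mixture representation: if z ~ N(0, cov) and w ~ chi2(df),
    # then mean + z / sqrt(w / df) has a multivariate t distribution
    # with df degrees of freedom.
    mean = np.asarray(mean, dtype=float)
    z = rng.multivariate_normal(np.zeros(len(mean)), cov, size=n)
    w = rng.chisquare(df, size=n) / df
    return mean + z / np.sqrt(w)[:, np.newaxis]

rng = np.random.RandomState(0)
draws = multivariate_t_rvs(mean=[1.0, -1.0],
                           cov=[[1.0, 0.5], [0.5, 2.0]],
                           df=5, n=50000, rng=rng)
print(draws.shape)  # (50000, 2)
```

For df > 2 the sample covariance converges to df/(df-2) times cov, which is a quick sanity check on the draws.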
Josef

> Kind Regards
> EO
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>

From ralf.gommers at googlemail.com Sun Oct 9 08:38:28 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 9 Oct 2011 14:38:28 +0200
Subject: [SciPy-User] OS X Lion build fails: arpack
In-Reply-To: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com>
References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com>
Message-ID: 

On Sun, Oct 9, 2011 at 1:58 PM, Paul Anton Letnes <
paul.anton.letnes at gmail.com> wrote:

> Aha - I tried the homebrew gfortran, which downloads gfortran from
> r.research.att.com. However, I am getting exactly the same error message.
> Hence, I suspect the problem isn't simply the gfortran compiler
> version/build.
>
>
You're using llvm-gcc, which is known to not work with scipy (although it
should compile as far as I know). Use plain gcc instead.
>
> Ralf
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

I tried exporting the environment variables:

CXX=/usr/bin/g++-4.2
CC=/usr/bin/gcc-4.2
CPP=/usr/bin/cpp-4.2
LD=/usr/bin/ld

as suggested here (for other reasons, of course):
http://solarianprogrammer.com/2011/09/20/compiling-gcc-4-6-1-on-mac-osx-lion/

However, this did not change even the error message when trying to install
scipy 0.9. scipy 0.10b2 installs nicely; but I suppose it may be buggier
than 0.9.0?

Cheers
Paul

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brendanarnold at gmail.com Sun Oct 9 09:24:27 2011
From: brendanarnold at gmail.com (Brendan Arnold)
Date: Sun, 9 Oct 2011 14:24:27 +0100
Subject: [SciPy-User] [ANN] Fortranformat - Fortran format emulation for Python
Message-ID: 

Hi there,

Fortranformat is a library that emulates the functionality of FORTRAN
format statements in Python.

>>> import fortranformat as ff
>>> line = ff.FortranRecordReader('(4I4)')
>>> line.read('   1   2   3   4')
[1, 2, 3, 4]
>>> line = ff.FortranRecordWriter('(4I4)')
>>> line.write([1, 2, 3, 4])
'   1   2   3   4'

It is extensively unit tested (to a degree that I was able to file a
bug report or two to the gfortran people!) but not tested in practice.
I'd appreciate some feedback.

It is available on PyPi and the project homepage is
https://bitbucket.org/brendanarnold/py-fortranformat

Try it out with:

pip install fortranformat

Kind regards,

Brendan

From eneide.odissea at gmail.com Sun Oct 9 10:35:05 2011
From: eneide.odissea at gmail.com (eneide.odissea)
Date: Sun, 9 Oct 2011 10:35:05 -0400
Subject: [SciPy-User] Multivariate Student's t distribution in python
In-Reply-To: 
References: 
Message-ID: 

Hi Joseph
Thanks for your response.
I didn't know about your http://scikits.appspot.com/statsmodels I need just 'rvs' EO On Sun, Oct 9, 2011 at 8:19 AM, wrote: > On Sun, Oct 9, 2011 at 8:06 AM, eneide.odissea > wrote: > > Hi all > > Just a quick info that I cannot sort it out. > > Do you know if it is available in Python a Multivariate Student's t > > distribution? > > I cannot find it anywhere.There is a multivariate normal in the package > > 'numpy.random' > > ( > http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html > ) ,but > > nothing in 'scipy.stats'. > > Can you help me? > > Which methods do you need? > > I have most of it in scikits.statsmodels, but it's in the sandbox, > mostly tested. If you just need rvs, then it's just a few lines of > code. > > Josef > > > Kind Regards > > EO > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eneide.odissea at gmail.com Sun Oct 9 10:38:45 2011 From: eneide.odissea at gmail.com (eneide.odissea) Date: Sun, 9 Oct 2011 10:38:45 -0400 Subject: [SciPy-User] Multivariate Student's t distribution in python In-Reply-To: References: Message-ID: Hi again Joseph quick and dummy one. Can you point me out where I can find that function in the sandbox? I am looking at it now, but I cannot find it EO On Sun, Oct 9, 2011 at 8:19 AM, wrote: > On Sun, Oct 9, 2011 at 8:06 AM, eneide.odissea > wrote: > > Hi all > > Just a quick info that I cannot sort it out. > > Do you know if it is available in Python a Multivariate Student's t > > distribution? 
> > I cannot find it anywhere. There is a multivariate normal in the package > > 'numpy.random' > > ( http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html > ) ,but > > nothing in 'scipy.stats'. > > Can you help me? > > Which methods do you need? > > I have most of it in scikits.statsmodels, but it's in the sandbox, > mostly tested. If you just need rvs, then it's just a few lines of > code. > > Josef > > > Kind Regards > > EO > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Sun Oct 9 11:46:44 2011 From: ckkart at hoc.net (Christian K.) Date: Sun, 09 Oct 2011 17:46:44 +0200 Subject: [SciPy-User] MLE with stats.lognorm In-Reply-To: References: Message-ID: Am 09.10.11 14:14, schrieb josef.pktd at gmail.com: > On Sun, Oct 9, 2011 at 8:06 AM, wrote: >> On Sun, Oct 9, 2011 at 7:51 AM, Christian K. wrote: >>> Hi, >>> >>> I wonder whether I am doing something wrong or if the following is to be >>> expected (using scipy 0.9): >>> >>> In [38]: from scipy import stats >>> >>> In [39]: dist = stats.lognorm(0.25,scale=200.0) >>> >>> In [40]: samples = dist.rvs(size=100) >>> >>> In [41]: print stats.lognorm.fit(samples) >>> C:\Python26\lib\site-packages\scipy\optimize\optimize.py:280: RuntimeWarning: >>> invalid value encountered in subtract >>> and max(abs(fsim[0]-fsim[1:])) <= ftol): >>> (1.0, 158.90310231282845, 21.013288720647015) >>> >>> In [42]: print stats.lognorm.fit(samples, floc=0) >>> [2.2059200167655884, 0, 21.013288720647015] >>> >>> Even when fixing loc=0.0, the results from the MLE for s and scale are very >>> different from the input parameters.
Is lognorm >>> >>> Any hints are highly appreciated. >> >> I just looked at similar cases, for the changes in scipy 0.9 and >> starting values, see >> http://projects.scipy.org/scipy/ticket/1530 >> >> Essentially, you need to find better starting values and give it to fit. >> >> Can you add it to the ticket? It's not quite the same, but I guess it >> is also that fix_loc_scale doesn't make sense. Ok. I'll do it. >> Note, I also get many of these warnings, >> >>> invalid value encountered in subtract >>> and max(abs(fsim[0]-fsim[1:])) <= ftol): >> >> they are caused when np.inf is returned for invalid arguments. In many >> cases optimize.fmin evaluates parameters that are not valid, but most >> of the time that doesn't seem to cause any problems, except it's >> annoying. > > for example with starting value for loc >>>> print stats.lognorm.fit(x, loc=0) > (0.23800805074491538, 0.034900026034516723, 196.31113801786194) I see. Is there any workaround/patch to force loc=0.0? What is the meaning of loc anyway?
I have some more observations: in case the fmin warning is shown, the result equals the initial guess: In [17]: stats.lognorm.fit(samples) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/optimize.py:280: RuntimeWarning: invalid value encountered in subtract and max(abs(fsim[0]-fsim[1:])) <= ftol): Out[17]: (1.0, 172.83866358041575, 24.677880663838486) In [18]: stats.lognorm._fitstart(samples) Out[18]: (1.0, 172.83866358041575, 24.677880663838486) Christian From pav at iki.fi Sun Oct 9 12:09:33 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 09 Oct 2011 18:09:33 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> Message-ID: 09.10.2011 14:54, Paul Anton Letnes kirjoitti: [clip] > scipy 0.10b2 installs nicely; but I suppose it may be buggier than 0.9.0? I'd expect it to be less buggy, given that more bugs have been fixed in it. -- Pauli Virtanen From josef.pktd at gmail.com Sun Oct 9 14:59:55 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 9 Oct 2011 14:59:55 -0400 Subject: [SciPy-User] Multivariate Student's t distribution in python In-Reply-To: References: Message-ID: On Sun, Oct 9, 2011 at 10:38 AM, eneide.odissea wrote: > Hi again Joseph > quick and dummy one. Can you point me out where I can find that function in > the sandbox? > I am looking at it now, but I cannot find it here is the rvs as function https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/distributions/multivariate.py#L87 here is the class MVT https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/distributions/mv_normal.py#L1010 I'm actually not quite sure what the status in master is since I was also working on other branches on some of these things. 
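The "few lines of code" Josef mentions correspond to the standard construction of multivariate t draws from a multivariate normal and a chi-square variate. A minimal sketch using numpy only (an illustration, not the statsmodels implementation; the function name here is made up):

```python
import numpy as np

def multivariate_t_rvs(mean, cov, df, size):
    """Draw `size` samples from a multivariate t distribution.

    Standard mixture construction: t = mean + z / sqrt(w), where z is
    multivariate normal with covariance `cov` and w ~ chi2(df) / df.
    """
    mean = np.asarray(mean, dtype=float)
    d = mean.shape[0]
    w = np.random.chisquare(df, size) / df          # shape (size,)
    z = np.random.multivariate_normal(np.zeros(d), cov, size)  # (size, d)
    return mean + z / np.sqrt(w)[:, np.newaxis]
```

For df -> inf the chi-square factor concentrates at 1 and the draws reduce to the multivariate normal, which is a quick sanity check.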
Josef > EO > > On Sun, Oct 9, 2011 at 8:19 AM, wrote: >> >> On Sun, Oct 9, 2011 at 8:06 AM, eneide.odissea >> wrote: >> > Hi all >> > Just a quick info that I cannot sort it out. >> > Do you know if it is available in Python a?Multivariate Student's t >> > distribution? >> > I cannot find it anywhere.There is a multivariate normal?in the package >> > 'numpy.random' >> > >> > (?http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html?)?,but >> > nothing in 'scipy.stats'. >> > Can you help me? >> >> Which methods do you need? >> >> I have most of it in scikits.statsmodels, but it's in the sandbox, >> mostly tested. If you just need rvs, then it's just a few lines of >> code. >> >> Josef >> >> > Kind Regards >> > EO >> > >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Sun Oct 9 15:18:40 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 9 Oct 2011 15:18:40 -0400 Subject: [SciPy-User] MLE with stats.lognorm In-Reply-To: References: Message-ID: On Sun, Oct 9, 2011 at 11:46 AM, Christian K. wrote: > Am 09.10.11 14:14, schrieb josef.pktd at gmail.com: >> On Sun, Oct 9, 2011 at 8:06 AM, ? wrote: >>> On Sun, Oct 9, 2011 at 7:51 AM, Christian K. 
wrote: >>>> Hi, >>>> >>>> I wonder whether I am doing something wrong or if the following is to be >>>> expected (using sciyp 0.9): >>>> >>>> In [38]: from scipy import stats >>>> >>>> In [39]: dist = stats.lognorm(0.25,scale=200.0) >>>> >>>> In [40]: samples = dist.rvs(size=100) >>>> >>>> In [41]: print stats.lognorm.fit(samples) >>>> C:\Python26\lib\site-packages\scipy\optimize\optimize.py:280: RuntimeWarning: >>>> invalid value encountered in subtract >>>> ?and max(abs(fsim[0]-fsim[1:])) <= ftol): >>>> (1.0, 158.90310231282845, 21.013288720647015) >>>> >>>> In [42]: print stats.lognorm.fit(samples, floc=0) >>>> [2.2059200167655884, 0, 21.013288720647015] >>>> >>>> Even when fixing loc=0.0, the results from the MLE for s and scale are very >>>> different from the input parameters. Is lognorm >>>> >>>> Any hints are highly appreciated. >>> >>> I just looked at similar cases, for the changes in scipy 0.9 and >>> starting values, see >>> http://projects.scipy.org/scipy/ticket/1530 >>> >>> Essentially, you need to find better starting values and give it to fit. >>> >>> Can you add it to the ticket? It's not quite the same, but I guess it >>> is also that fix_loc_scale doesn't make sense. > > Ok. I'll do it. > >>> Note, I also get many of these warnings, >>> >>>> invalid value encountered in subtract >>>> ?and max(abs(fsim[0]-fsim[1:])) <= ftol): >>> >>> they are caused when np.inf is returned for invalid arguments. In many >>> cases optimize.fmin evaluates parameters that are not valid, but most >>> of the time that doesn't seem to cause any problems, exept it's >>> annoying. >> >> for example with starting value for loc >>>>> print stats.lognorm.fit(x, loc=0) >> (0.23800805074491538, 0.034900026034516723, 196.31113801786194) > > I see. Is there any workaround/patch to force loc=0.0? What is the > meaning of loc anyway? loc is the starting value for fmin, I don't remember how to specify starting values for shape parameters, I never used it. 
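For the floc=0 case in the original post there is also a workaround that bypasses stats.lognorm.fit entirely: with loc fixed at zero, log(X) is normally distributed, so the MLE for shape and scale has a closed form (a sketch; the `_hat` variable names are illustrative):

```python
import numpy as np
from scipy import stats

np.random.seed(0)
samples = stats.lognorm(0.25, scale=200.0).rvs(size=10000)

# With loc fixed at 0, log(X) ~ Normal(log(scale), s), so the MLE
# reduces to the sample mean and standard deviation of log(samples).
logx = np.log(samples)
s_hat = logx.std()               # estimate of the shape parameter s
scale_hat = np.exp(logx.mean())  # estimate of the scale parameter
```

The estimates can then be passed to fit as starting values, which avoids the bad default _fitstart guess discussed above.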
As in the ticket you could monkey patch the _fitstart function >>> stats.cauchy._fitstart = lambda x:(0,1) >>> stats.cauchy.fit(x) or what I do to experiment with starting values is stats.distributions.lognorm_gen._fitstart = fitstart_lognormal where fitstart_lognormal is my own function, that takes the sample as argument, and needs to return 3 starting values for (shape, loc, and scale) > I have some more observations: in case the fmin warning is shown, the > result equals the initial guess: > > In [17]: stats.lognorm.fit(samples) > /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/optimize/optimize.py:280: > RuntimeWarning: invalid value encountered in subtract > ?and max(abs(fsim[0]-fsim[1:])) <= ftol): > Out[17]: (1.0, 172.83866358041575, 24.677880663838486) > > In [18]: stats.lognorm._fitstart(samples) > Out[18]: (1.0, 172.83866358041575, 24.677880663838486) OK, that needs a closer look. I tried for a while with different starting values for cauchy and my impression was that most of the time fmin converged in spite of the warning. My Monte Carlo experiments with some distributions look pretty good but I didn't check yet how many of the replications have parameters that didn't move away from the starting values. Maybe another way of imposing constraints than just to return np.inf for out of bounds parameters would be more robust. Josef > > Christian > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From eneide.odissea at gmail.com Sun Oct 9 17:59:20 2011 From: eneide.odissea at gmail.com (eneide.odissea) Date: Sun, 9 Oct 2011 17:59:20 -0400 Subject: [SciPy-User] Multivariate Student's t distribution in python In-Reply-To: References: Message-ID: HI Joseph! 
Thanks a lot for your help Mn On Sun, Oct 9, 2011 at 2:59 PM, wrote: > On Sun, Oct 9, 2011 at 10:38 AM, eneide.odissea > wrote: > > Hi again Joseph > > quick and dummy one. Can you point me out where I can find that function > in > > the sandbox? > > I am looking at it now, but I cannot find it > > here is the rvs as function > > https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/distributions/multivariate.py#L87 > > here is the class MVT > > https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/distributions/mv_normal.py#L1010 > > I'm actually not quite sure what the status in master is since I was > also working on other branches on some of these things. > > Josef > > > > EO > > > > On Sun, Oct 9, 2011 at 8:19 AM, wrote: > >> > >> On Sun, Oct 9, 2011 at 8:06 AM, eneide.odissea < > eneide.odissea at gmail.com> > >> wrote: > >> > Hi all > >> > Just a quick info that I cannot sort it out. > >> > Do you know if it is available in Python a Multivariate Student's t > >> > distribution? > >> > I cannot find it anywhere.There is a multivariate normal in the > package > >> > 'numpy.random' > >> > > >> > ( > http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html > ) ,but > >> > nothing in 'scipy.stats'. > >> > Can you help me? > >> > >> Which methods do you need? > >> > >> I have most of it in scikits.statsmodels, but it's in the sandbox, > >> mostly tested. If you just need rvs, then it's just a few lines of > >> code. 
> >> > >> Josef > >> > >> > Kind Regards > >> > EO > >> > > >> > > >> > _______________________________________________ > >> > SciPy-User mailing list > >> > SciPy-User at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Sun Oct 9 20:49:43 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 9 Oct 2011 20:49:43 -0400 Subject: [SciPy-User] ANN: pandas 0.4.3 (with Python 3 support) Message-ID: I'm pleased to announce the pandas 0.4.3 release. In addition to a number of new features, speed optimizations, and bug fixes, this release notably brings Python 3 support thanks to the help of Thomas Kluyver. Please see the release notes below for full details. best, Wes What is it ========== pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language.
Links ===== Release Notes: https://github.com/wesm/pandas/blob/master/RELEASE.rst Documentation: http://pandas.sourceforge.net Installers: http://pypi.python.org/pypi/pandas Code Repository: http://github.com/wesm/pandas Mailing List: http://groups.google.com/group/pystatsmodels Blog: http://blog.wesmckinney.com pandas 0.4.3 release notes ========================== **Release date:** 10/9/2011 This is largely a bugfix release from 0.4.2 but also includes a handful of new and enhanced features. Also, pandas can now be installed and used on Python 3 (thanks Thomas Kluyver!). **New features / modules** - Python 3 support using 2to3 (PR #200, Thomas Kluyver) - Add `name` attribute to `Series` and added relevant logic and tests. Name now prints as part of `Series.__repr__` - Add `name` attribute to standard Index so that stacking / unstacking does not discard names and so that indexed DataFrame objects can be reliably round-tripped to flat files, pickle, HDF5, etc. - Add `isnull` and `notnull` as instance methods on Series (PR #209, GH #203) **Improvements to existing features** - Skip xlrd-related unit tests if not installed - `Index.append` and `MultiIndex.append` can accept a list of Index objects to concatenate together - Altered binary operations on differently-indexed SparseSeries objects to use the integer-based (dense) alignment logic which is faster with a larger number of blocks (GH #205) - Refactored `Series.__repr__` to be a bit more clean and consistent **API Changes** - `Series.describe` and `DataFrame.describe` now bring the 25% and 75% quartiles instead of the 10% and 90% deciles. The other outputs have not changed - `Series.toString` will print deprecation warning, has been de-camelCased to `to_string` **Bug fixes** - Fix broken interaction between `Index` and `Int64Index` when calling intersection. 
Implement `Int64Index.intersection` - `MultiIndex.sortlevel` discarded the level names (GH #202) - Fix bugs in groupby, join, and append due to improper concatenation of `MultiIndex` objects (GH #201) - Fix regression from 0.4.1, `isnull` and `notnull` ceased to work on other kinds of Python scalar objects like `datetime.datetime` - Raise more helpful exception when attempting to write empty DataFrame or LongPanel to `HDFStore` (GH #204) - Use stdlib csv module to properly escape strings with commas in `DataFrame.to_csv` (PR #206, Thomas Kluyver) - Fix Python ndarray access in Cython code for sparse blocked index integrity check - Fix bug writing Series to CSV in Python 3 (PR #209) - Miscellaneous Python 3 bugfixes Thanks ------ - Thomas Kluyver - rsamson From scipy at samueljohn.de Mon Oct 10 04:12:31 2011 From: scipy at samueljohn.de (Samuel John) Date: Mon, 10 Oct 2011 10:12:31 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> Message-ID: <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> I used the homebrew gfortran and then successfully compiled numpy and scipy with: CC=gcc-4.2 CXX=g++-4.2 FFLAGS=-ff2c python setup.py build --fcompiler=gfortran hope this helps, Samuel From paul.anton.letnes at gmail.com Mon Oct 10 04:49:36 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Mon, 10 Oct 2011 10:49:36 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> Message-ID: <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> On 10. okt. 
2011, at 10:12, Samuel John wrote: > I used the homebrew gfortran and then successfully compiled numpy and scipy with: > > CC=gcc-4.2 > CXX=g++-4.2 > FFLAGS=-ff2c > python setup.py build --fcompiler=gfortran > > hope this helps, > Samuel > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Thanks for helpful advice, everyone. I finally got it working yesterday. I decided to go with scipy 0.10b2, I always got into problems with 0.9. For the record, scipy 0.10b2 failed some tests and hangs on one of them. That's why I remain somewhat skeptical with regard to its non-bugginess. Would it be of interest to share the test results? Recently I've become a big fan of cmake, and the same people have created cdash; maybe something similar should be set up for scipy? It would allow people to share test results and benchmarks more easily and systematically during development and beta/rc release. For example, LAPACK uses this system: http://my.cdash.org/index.php?project=LAPACK Probably several similar solutions exist out there. I, for one, would be more inclined to run tests for my compiler/OS if I could simply 'git pull' and then run 'python setup.py test_with_upload'. Cheers Paul From pav at iki.fi Mon Oct 10 05:17:56 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 10 Oct 2011 11:17:56 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> Message-ID: 10.10.2011 10:49, Paul Anton Letnes kirjoitti: [clip] > For the record, scipy 0.10b2 failed some tests and hangs on one of them. > That's why I remain somewhat skeptical with regard to its non-bugginess. 
> Would it be of interest to share the test results? Please do, if it is different from the one known OSX-specific release blocker issue: http://projects.scipy.org/scipy/ticket/1523 -- Pauli Virtanen From paul.anton.letnes at gmail.com Mon Oct 10 05:35:32 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Mon, 10 Oct 2011 11:35:32 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> Message-ID: <5C5E151C-2BD2-4528-999B-4B7D9EEA1C53@gmail.com> On 10. okt. 2011, at 11:17, Pauli Virtanen wrote: > 10.10.2011 10:49, Paul Anton Letnes kirjoitti: > [clip] >> For the record, scipy 0.10b2 failed some tests and hangs on one of them. >> That's why I remain somewhat skeptical with regard to its non-bugginess. >> Would it be of interest to share the test results? > > Please do, if it is different from the one known OSX-specific release > blocker issue: http://projects.scipy.org/scipy/ticket/1523 > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Very well. After a while the test hangs, Ctrl-C does nothing, so I end up using Ctrl-Z and kill to stop the python process. I assume the test is supposed to run in not too long on a Core i7; it takes about a minute on an older desktop machine (Core 2 quad). I compiled using gcc and gfortran 4.6.0 build from homebrew-alt (I think - maybe setup.py dug up Apple's gcc). I suppose there's a theoretical possibility that there's an issue with the compiler? I'd like my scipy installation to be nice and bug-free, so I'll gladly help debug if I'm able to. 
i-courant /tmp/paulanto % python -c 'import scipy;scipy.test()' 2>&1 Running unit tests for scipy NumPy version 1.6.1 NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy SciPy version 0.10.0b2 SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] nose version 1.1.2 ............................................................................................................................................................................................................................K............................................................................................................/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:605: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. 
warnings.warn(message) ........................K..K....../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/numeric.py:1920: RuntimeWarning: invalid value encountered in absolute return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) ............................................................................................................................................................................................................................................................................................................................................................................................................................................/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes warnings.warn("Unfamiliar format bytes", WavFileWarning) /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood warnings.warn("chunk not understood", WavFileWarning) 
....................................................................................F..FF......................................................................................................................................SSSSSS......SSSSSS......SSSS.....................FFF.........................................F....FF.......S............................................................................................................................................................................................................................................................K......................................................................................................................................................................................................SSSSS............S........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSSSSSSS.........../usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:103: RuntimeWarning: invalid value encountered in absolute ind = np.argsort(abs(reval)) FEEEEEEFEEEFEEEEEEEEEEEFEFEFEEEEEEEEFEEF/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:86: RuntimeWarning: divide by 
zero encountered in divide reval = 1. / (eval - sigma) FEEEEEEFEEEFEEEEEEE/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/fromnumeric.py:2296: RuntimeWarning: overflow encountered in multiply return round(decimals, out) F............................................................EEEEEEEEEEEEEEEFEEEEEEEEEEEEE/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:96: RuntimeWarning: divide by zero encountered in divide reval = eval / (eval - sigma) FEEE^C^C^C^Z zsh: suspended python -c 'import scipy;scipy.test()' 2>&1 i-courant /tmp/paulanto % kill %1i-courant /tmp/paulanto % [1] + terminated python -c 'import scipy;scipy.test()' 2>&1 From pav at iki.fi Mon Oct 10 06:20:41 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 10 Oct 2011 12:20:41 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: <5C5E151C-2BD2-4528-999B-4B7D9EEA1C53@gmail.com> References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> <5C5E151C-2BD2-4528-999B-4B7D9EEA1C53@gmail.com> Message-ID: 10.10.2011 11:35, Paul Anton Letnes kirjoitti: [clip] > I compiled using gcc and gfortran 4.6.0 build from homebrew-alt > (I think - maybe setup.py dug up Apple's gcc). I suppose there's > a theoretical possibility that there's an issue with the compiler? > I'd like my scipy installation to be nice and bug-free, so I'll > gladly help debug if I'm able to. > > i-courant /tmp/paulanto % python -c 'import scipy;scipy.test()' 2>&1 Please run scipy.test(verbose=2) so that we can also see which test hangs. 
Thanks, -- Pauli Virtanen From paul.anton.letnes at gmail.com Mon Oct 10 06:42:26 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Mon, 10 Oct 2011 12:42:26 +0200 Subject: [SciPy-User] OS X Lion build fails: arpack In-Reply-To: References: <5987CDDA-4EFF-43CD-BA21-87CBA196C953@gmail.com> <442034F2-311D-462F-B05D-F83DEAB1EACB@gmail.com> <50DBE4B0-FEA7-42EB-BAFB-F5093F797336@samueljohn.de> <06D3ABB2-EDC2-4BA1-A5F3-4CCA5966E8F2@gmail.com> <5C5E151C-2BD2-4528-999B-4B7D9EEA1C53@gmail.com> Message-ID: <5BDC6F3A-B62B-43B8-B3B8-726F5D6EA518@gmail.com> On 10. okt. 2011, at 12:20, Pauli Virtanen wrote: > 10.10.2011 11:35, Paul Anton Letnes kirjoitti: > [clip] >> I compiled using gcc and gfortran 4.6.0 build from homebrew-alt >> (I think - maybe setup.py dug up Apple's gcc). I suppose there's >> a theoretical possibility that there's an issue with the compiler? >> I'd like my scipy installation to be nice and bug-free, so I'll >> gladly help debug if I'm able to. >> >> i-courant /tmp/paulanto % python -c 'import scipy;scipy.test()' 2>&1 > > Please run > > scipy.test(verbose=2) > > so that we can also see which test hangs. > > Thanks, > > -- > Pauli Virtanen OK, here goes. CPU goes to 100% on one core. Before the test hangs, it also prints a lot of 'ERROR', probably meaning that several tests are failing. Looks like this is the line that hangs: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling') ? Full log at the bottom of the email. Before that, I printed out which libraries are linked to, because since Ctrl-C does nothing, I am guessing the 'hang' is somewhere in a C/Fortran routine. 
Libraries:
++++++++++

/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy % otool -L **/*so
cluster/_hierarchy_wrap.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
cluster/_vq.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
fftpack/_fftpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
fftpack/convolve.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
integrate/_dop.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
integrate/_odepack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
integrate/_quadpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
integrate/vode.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
interpolate/_fitpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
interpolate/_interpolate.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
interpolate/dfitpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
interpolate/interpnd.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
io/matlab/mio5_utils.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
io/matlab/mio_utils.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
io/matlab/streams.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/blas/cblas.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/blas/fblas.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/lapack/atlas_version.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/lapack/calc_lwork.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/lapack/clapack.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
lib/lapack/flapack.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/_flinalg.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/atlas_version.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/calc_lwork.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/cblas.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/clapack.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/fblas.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
linalg/flapack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
ndimage/_nd_image.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
odr/__odrpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_cobyla.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_lbfgsb.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_minpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_nnls.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_slsqp.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/_zeros.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/minpack2.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
optimize/moduleTNC.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
signal/sigtools.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
signal/spectral.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
signal/spline.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/linalg/dsolve/_superlu.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/linalg/eigen/arpack/_arpack.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/linalg/isolve/_iterative.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_bsr.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_coo.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_csc.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_csgraph.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_csr.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
sparse/sparsetools/_dia.so:
	/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 52.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
spatial/_distance_wrap.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
spatial/ckdtree.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
spatial/qhull.so:
	/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate (compatibility version 1.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
special/_cephes.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
special/_logit.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
special/lambertw.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
special/orthogonal_eval.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
special/specfun.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
stats/futil.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
stats/mvn.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
stats/statlib.so:
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgfortran.3.dylib (compatibility version 4.0.0, current version 4.0.0)
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/local/Cellar/gcc/4.6.0/gcc/lib/libquadmath.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
stats/vonmises_cython.so:
	/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1105.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 159.0.0)
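[Editor's note: one thing the linkage dump makes visible is that several Fortran-based extensions, including `sparse/linalg/eigen/arpack/_arpack.so`, link both Apple's Accelerate framework and GNU `libgfortran`. Mixing the two is a known source of crashes and hangs on OS X, because Accelerate follows the old g77/f2c calling convention for Fortran functions returning single-precision and complex values while gfortran-compiled callers do not. A small helper to flag such modules from `otool -L` output; this is a sketch, `mixed_linkage` is not an existing tool.]

```python
def mixed_linkage(otool_output):
    """Return modules from `otool -L` output that link BOTH Apple's
    Accelerate framework and GNU libgfortran -- a combination with
    incompatible Fortran calling conventions on OS X."""
    flagged = []
    current, libs = None, set()

    def close():
        # Record the previous module if it pulled in both libraries.
        if current is not None and {'Accelerate', 'libgfortran'} <= libs:
            flagged.append(current)

    for raw in otool_output.splitlines():
        line = raw.strip()
        if line.endswith('.so:'):          # header line naming the module
            close()
            current, libs = line[:-1], set()
        elif 'Accelerate' in line:
            libs.add('Accelerate')
        elif 'libgfortran' in line:
            libs.add('libgfortran')
    close()                                # flush the last module
    return flagged
```

The usual fix is rebuilding so that one consistent Fortran ABI is used throughout (e.g. all-ATLAS built with the same gfortran, rather than Accelerate plus gfortran objects).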
i-dhcp-49211 /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy %

Test log:
+++++++++

/tmp/paulanto % python -c 'import scipy;scipy.test(verbose=2)' 2>&1    [12:36:21 on 11-10-10]
Running unit tests for scipy
NumPy version 1.6.1
NumPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy
SciPy version 0.10.0b2
SciPy is installed in /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy
Python version 2.7.2 (default, Oct 9 2011, 18:03:13) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
nose version 1.1.2
Tests cophenet(Z) on tdist data set. ... ok
Tests cophenet(Z, Y) on tdist data set. ... ok
Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. ... ok
Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok
Tests correspond(Z, y) on linkage and CDMs over observation sets of different sizes. Correspondance should be false. ... ok
Tests correspond(Z, y) with empty linkage and condensed distance matrix. ... ok
Tests num_obs_linkage with observation matrices of multiple sizes. ... ok
Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok
Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok
Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok
Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ... ok
Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ... ok
Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. ... ok
Tests from_mlab_linkage on empty linkage array. ... ok
Tests from_mlab_linkage on linkage array with multiple rows. ... ok
Tests from_mlab_linkage on linkage array with single row. ... ok
Tests inconsistency matrix calculation (depth=1) on a complete linkage. ... ok
Tests inconsistency matrix calculation (depth=2) on a complete linkage. ... ok
Tests inconsistency matrix calculation (depth=3) on a complete linkage. ... ok
Tests inconsistency matrix calculation (depth=4) on a complete linkage. ... ok
Tests inconsistency matrix calculation (depth=1, dataset=Q) with single linkage. ... ok
Tests inconsistency matrix calculation (depth=2, dataset=Q) with single linkage. ... ok
Tests inconsistency matrix calculation (depth=3, dataset=Q) with single linkage. ... ok
Tests inconsistency matrix calculation (depth=4, dataset=Q) with single linkage. ... ok
Tests inconsistency matrix calculation (depth=1) on a single linkage. ... ok
Tests inconsistency matrix calculation (depth=2) on a single linkage. ... ok
Tests inconsistency matrix calculation (depth=3) on a single linkage. ... ok
Tests inconsistency matrix calculation (depth=4) on a single linkage. ... ok
Tests is_isomorphic on test case #1 (one flat cluster, different labellings) ... ok
Tests is_isomorphic on test case #2 (two flat clusters, different labelings) ... ok
Tests is_isomorphic on test case #3 (no flat clusters) ... ok
Tests is_isomorphic on test case #4A (3 flat clusters, different labelings, isomorphic) ... ok
Tests is_isomorphic on test case #4B (3 flat clusters, different labelings, nonisomorphic) ... ok
Tests is_isomorphic on test case #4C (3 flat clusters, different labelings, isomorphic) ... ok
Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling). Run 3 times. ... ok
Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling). Run 3 times. ... ok
Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling). Run 3 times. ... ok
Tests is_isomorphic on test case #5A (1000 observations, 2 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok
Tests is_isomorphic on test case #5B (1000 observations, 3 random clusters, random permutation of the labeling, slightly nonisomorphic.) Run 3 times. ... ok
Tests is_isomorphic on test case #5C (1000 observations, 5 random clusters, random permutation of the labeling, slightly non-isomorphic.) Run 3 times. ... ok
Tests is_monotonic(Z) on 1x4 linkage. Expecting True. ... ok
Tests is_monotonic(Z) on 2x4 linkage. Expecting False. ... ok
Tests is_monotonic(Z) on 2x4 linkage. Expecting True. ... ok
Tests is_monotonic(Z) on 3x4 linkage (case 1). Expecting False. ... ok
Tests is_monotonic(Z) on 3x4 linkage (case 2). Expecting False. ... ok
Tests is_monotonic(Z) on 3x4 linkage (case 3). Expecting False ... ok
Tests is_monotonic(Z) on 3x4 linkage. Expecting True. ... ok
Tests is_monotonic(Z) on an empty linkage. ... ok
Tests is_monotonic(Z) on clustering generated by single linkage on Iris data set. Expecting True. ... ok
Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Expecting True. ... ok
Tests is_monotonic(Z) on clustering generated by single linkage on tdist data set. Perturbing. Expecting False. ... ok
Tests is_valid_im(R) on im over 2 observations. ... ok
Tests is_valid_im(R) on im over 3 observations. ... ok
Tests is_valid_im(R) with 3 columns. ... ok
Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3). ... ok
Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link counts. ... ok
Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height means. ... ok
Tests is_valid_im(R) on im on observation sets between sizes 4 and 15 (step size 3) with negative link height standard deviations. ... ok
Tests is_valid_im(R) with 5 columns. ... ok
Tests is_valid_im(R) with empty inconsistency matrix. ... ok
Tests is_valid_im(R) with integer type. ... ok
Tests is_valid_linkage(Z) on linkage over 2 observations. ... ok
Tests is_valid_linkage(Z) on linkage over 3 observations. ... ok
Tests is_valid_linkage(Z) with 3 columns. ... ok
Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok
Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative counts. ... ok
Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative distances. ... ok
Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (left). ... ok
Tests is_valid_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3) with negative indices (right). ... ok
Tests is_valid_linkage(Z) with 5 columns. ... ok
Tests is_valid_linkage(Z) with empty linkage. ... ok
Tests is_valid_linkage(Z) with integer type. ... ok
Tests leaders using a flat clustering generated by single linkage. ... ok
Tests leaves_list(Z) on a 1x4 linkage. ... ok
Tests leaves_list(Z) on a 2x4 linkage. ... ok
Tests leaves_list(Z) on the Iris data set using average linkage. ... ok
Tests leaves_list(Z) on the Iris data set using centroid linkage. ... ok
Tests leaves_list(Z) on the Iris data set using complete linkage. ... ok
Tests leaves_list(Z) on the Iris data set using median linkage. ... ok
Tests leaves_list(Z) on the Iris data set using single linkage. ... ok
Tests leaves_list(Z) on the Iris data set using ward linkage. ... ok
Tests linkage(Y, 'average') on the tdist data set. ... ok
Tests linkage(Y, 'centroid') on the Q data set. ... ok
Tests linkage(Y, 'complete') on the Q data set. ... ok
Tests linkage(Y, 'complete') on the tdist data set. ... ok
Tests linkage(Y) where Y is a 0x4 linkage matrix. Exception expected. ... ok
Tests linkage(Y, 'single') on the Q data set. ... ok
Tests linkage(Y, 'single') on the tdist data set. ... ok
Tests linkage(Y, 'weighted') on the Q data set. ... ok
Tests linkage(Y, 'weighted') on the tdist data set. ... ok
Tests maxdists(Z) on the Q data set using centroid linkage. ... ok
Tests maxdists(Z) on the Q data set using complete linkage. ... ok
Tests maxdists(Z) on the Q data set using median linkage. ... ok
Tests maxdists(Z) on the Q data set using single linkage. ... ok
Tests maxdists(Z) on the Q data set using Ward linkage. ... ok
Tests maxdists(Z) on empty linkage. Expecting exception. ... ok
Tests maxdists(Z) on linkage with one cluster. ... ok
Tests maxinconsts(Z, R) on the Q data set using centroid linkage. ... ok
Tests maxinconsts(Z, R) on the Q data set using complete linkage. ... ok
Tests maxinconsts(Z, R) on the Q data set using median linkage. ... ok
Tests maxinconsts(Z, R) on the Q data set using single linkage. ... ok
Tests maxinconsts(Z, R) on the Q data set using Ward linkage. ... ok
Tests maxinconsts(Z, R) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok
Tests maxinconsts(Z, R) on empty linkage. Expecting exception. ... ok
Tests maxinconsts(Z, R) on linkage with one cluster. ... ok
Tests maxRstat(Z, R, 0) on the Q data set using centroid linkage. ... ok
Tests maxRstat(Z, R, 0) on the Q data set using complete linkage. ... ok
Tests maxRstat(Z, R, 0) on the Q data set using median linkage. ... ok
Tests maxRstat(Z, R, 0) on the Q data set using single linkage. ... ok
Tests maxRstat(Z, R, 0) on the Q data set using Ward linkage. ... ok
Tests maxRstat(Z, R, 0) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok
Tests maxRstat(Z, R, 0) on empty linkage. Expecting exception. ... ok
Tests maxRstat(Z, R, 0) on linkage with one cluster. ... ok
Tests maxRstat(Z, R, 1) on the Q data set using centroid linkage. ... ok
Tests maxRstat(Z, R, 1) on the Q data set using complete linkage. ... ok
Tests maxRstat(Z, R, 1) on the Q data set using median linkage. ... ok
Tests maxRstat(Z, R, 1) on the Q data set using single linkage. ... ok
Tests maxRstat(Z, R, 1) on the Q data set using Ward linkage. ... ok
Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok
Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok
Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok
Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok
Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok
Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok
Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... ok
Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok
Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok
Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok
Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok
Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok
Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok
Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok
Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok
Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok
Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok
Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok
Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok
Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok
Tests maxRstat(Z, R, -1). Expecting exception. ... ok
Tests maxRstat(Z, R, 4). Expecting exception. ... ok
Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok
Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok
Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok
Tests num_obs_linkage(Z) with empty linkage. ... ok
Tests to_mlab_linkage on linkage array with multiple rows. ... ok
Tests to_mlab_linkage on empty linkage array. ... ok
Tests to_mlab_linkage on linkage array with single row. ... ok
test_hierarchy.load_testing_files ... ok
Ticket #505. ... ok
Testing that kmeans2 init methods work. ... ok
Testing simple call to kmeans2 with rank 1 data. ... ok
Testing simple call to kmeans2 with rank 1 data. ... ok
Testing simple call to kmeans2 and its results. ... ok
Regression test for #546: fail when k arg is 0. ... ok
This will cause kmean to have a cluster with no points. ... ok
test_kmeans_simple (test_vq.TestKMean) ... ok
test_large_features (test_vq.TestKMean) ... ok
test_py_vq (test_vq.TestVq) ... ok
test_py_vq2 (test_vq.TestVq) ... ok
test_vq (test_vq.TestVq) ... ok
Test special rank 1 vq algo, python implementation. ... ok
test_codata.test_find ... ok
test_codata.test_basic_table_parse ... ok
test_codata.test_basic_lookup ... ok
test_codata.test_find_all ... ok
test_codata.test_find_single ... ok
test_codata.test_2002_vs_2006 ... ok
Check that updating stored values with exact ones worked. ... ok
test_constants.test_fahrenheit_to_celcius ... ok
test_constants.test_celcius_to_kelvin ... ok
test_constants.test_kelvin_to_celcius ... ok
test_constants.test_fahrenheit_to_kelvin ... ok
test_constants.test_kelvin_to_fahrenheit ... ok
test_constants.test_celcius_to_fahrenheit ... ok
test_constants.test_lambda_to_nu ... ok
test_constants.test_nu_to_lambda ... ok
test_definition (test_basic.TestDoubleFFT) ... ok
test_djbfft (test_basic.TestDoubleFFT) ... ok
test_n_argument_real (test_basic.TestDoubleFFT) ... ok
test_definition (test_basic.TestDoubleIFFT) ... ok
test_definition_real (test_basic.TestDoubleIFFT) ... ok
test_djbfft (test_basic.TestDoubleIFFT) ... ok
test_random_complex (test_basic.TestDoubleIFFT) ... ok
test_random_real (test_basic.TestDoubleIFFT) ... ok
test_size_accuracy (test_basic.TestDoubleIFFT) ... ok
test_axes_argument (test_basic.TestFftn) ... ok
test_definition (test_basic.TestFftn) ... ok
test_shape_argument (test_basic.TestFftn) ... ok
Test that fftn raises ValueError when s.shape is longer than x.shape ... ok
test_shape_axes_argument (test_basic.TestFftn) ... ok
test_shape_axes_argument2 (test_basic.TestFftn) ... ok
test_definition (test_basic.TestFftnSingle) ... ok
test_size_accuracy (test_basic.TestFftnSingle) ... ok
test_definition (test_basic.TestIRFFTDouble) ... ok
test_djbfft (test_basic.TestIRFFTDouble) ... ok
test_random_real (test_basic.TestIRFFTDouble) ... ok
test_size_accuracy (test_basic.TestIRFFTDouble) ... ok
test_definition (test_basic.TestIRFFTSingle) ... ok
test_djbfft (test_basic.TestIRFFTSingle) ... ok
test_random_real (test_basic.TestIRFFTSingle) ... ok
test_size_accuracy (test_basic.TestIRFFTSingle) ... ok
test_definition (test_basic.TestIfftnDouble) ... ok
test_random_complex (test_basic.TestIfftnDouble) ... ok
test_definition (test_basic.TestIfftnSingle) ... ok
test_random_complex (test_basic.TestIfftnSingle) ... ok
test_complex (test_basic.TestLongDoubleFailure) ... ok
test_real (test_basic.TestLongDoubleFailure) ... ok
test_basic.TestOverwrite.test_fft ... ok
test_basic.TestOverwrite.test_fftn ... ok
test_basic.TestOverwrite.test_ifft ... ok
test_basic.TestOverwrite.test_ifftn ... ok
test_basic.TestOverwrite.test_irfft ... ok
test_basic.TestOverwrite.test_rfft ... ok
test_definition (test_basic.TestRFFTDouble) ... ok
test_djbfft (test_basic.TestRFFTDouble) ... ok
test_definition (test_basic.TestRFFTSingle) ... ok
test_djbfft (test_basic.TestRFFTSingle) ... ok
test_definition (test_basic.TestSingleFFT) ... ok
test_djbfft (test_basic.TestSingleFFT) ... ok
test_n_argument_real (test_basic.TestSingleFFT) ... ok
test_notice (test_basic.TestSingleFFT) ...
KNOWNFAIL: single-precision FFT implementation is partially disabled, until accuracy issues with large prime powers are resolved test_definition (test_basic.TestSingleIFFT) ... ok test_definition_real (test_basic.TestSingleIFFT) ... ok test_djbfft (test_basic.TestSingleIFFT) ... ok test_random_complex (test_basic.TestSingleIFFT) ... ok test_random_real (test_basic.TestSingleIFFT) ... ok test_size_accuracy (test_basic.TestSingleIFFT) ... ok fft returns wrong result with axes parameter. ... ok test_definition (test_helper.TestFFTFreq) ... ok test_definition (test_helper.TestFFTShift) ... ok test_inverse (test_helper.TestFFTShift) ... ok test_definition (test_helper.TestRFFTFreq) ... ok test_definition (test_pseudo_diffs.TestDiff) ... ok test_expr (test_pseudo_diffs.TestDiff) ... ok test_expr_large (test_pseudo_diffs.TestDiff) ... ok test_int (test_pseudo_diffs.TestDiff) ... ok test_period (test_pseudo_diffs.TestDiff) ... ok test_random_even (test_pseudo_diffs.TestDiff) ... ok test_random_odd (test_pseudo_diffs.TestDiff) ... ok test_sin (test_pseudo_diffs.TestDiff) ... ok test_zero_nyquist (test_pseudo_diffs.TestDiff) ... ok test_definition (test_pseudo_diffs.TestHilbert) ... ok test_random_even (test_pseudo_diffs.TestHilbert) ... ok test_random_odd (test_pseudo_diffs.TestHilbert) ... ok test_tilbert_relation (test_pseudo_diffs.TestHilbert) ... ok test_definition (test_pseudo_diffs.TestIHilbert) ... ok test_itilbert_relation (test_pseudo_diffs.TestIHilbert) ... ok test_definition (test_pseudo_diffs.TestITilbert) ... ok test_pseudo_diffs.TestOverwrite.test_cc_diff ... ok test_pseudo_diffs.TestOverwrite.test_cs_diff ... ok test_pseudo_diffs.TestOverwrite.test_diff ... ok test_pseudo_diffs.TestOverwrite.test_hilbert ... ok test_pseudo_diffs.TestOverwrite.test_itilbert ... ok test_pseudo_diffs.TestOverwrite.test_sc_diff ... ok test_pseudo_diffs.TestOverwrite.test_shift ... ok test_pseudo_diffs.TestOverwrite.test_ss_diff ... 
ok test_pseudo_diffs.TestOverwrite.test_tilbert ... ok test_definition (test_pseudo_diffs.TestShift) ... ok test_definition (test_pseudo_diffs.TestTilbert) ... ok test_random_even (test_pseudo_diffs.TestTilbert) ... ok test_random_odd (test_pseudo_diffs.TestTilbert) ... ok test_axis (test_real_transforms.TestDCTIDouble) ... ok test_definition (test_real_transforms.TestDCTIDouble) ... ok test_axis (test_real_transforms.TestDCTIFloat) ... ok test_definition (test_real_transforms.TestDCTIFloat) ... ok test_axis (test_real_transforms.TestDCTIIDouble) ... ok test_definition (test_real_transforms.TestDCTIIDouble) ... ok Test correspondance with matlab (orthornomal mode). ... ok test_axis (test_real_transforms.TestDCTIIFloat) ... ok test_definition (test_real_transforms.TestDCTIIFloat) ... ok Test correspondance with matlab (orthornomal mode). ... ok test_axis (test_real_transforms.TestDCTIIIDouble) ... ok test_definition (test_real_transforms.TestDCTIIIDouble) ... ok Test orthornomal mode. ... ok test_axis (test_real_transforms.TestDCTIIIFloat) ... ok test_definition (test_real_transforms.TestDCTIIIFloat) ... ok Test orthornomal mode. ... ok test_definition (test_real_transforms.TestIDCTIDouble) ... ok test_definition (test_real_transforms.TestIDCTIFloat) ... ok test_definition (test_real_transforms.TestIDCTIIDouble) ... ok test_definition (test_real_transforms.TestIDCTIIFloat) ... ok test_definition (test_real_transforms.TestIDCTIIIDouble) ... ok test_definition (test_real_transforms.TestIDCTIIIFloat) ... ok test_real_transforms.TestOverwrite.test_dct ... ok test_real_transforms.TestOverwrite.test_idct ... ok test_no_params (test_integrate.DOP853CheckParameterUse) ... ok test_one_scalar_param (test_integrate.DOP853CheckParameterUse) ... ok test_two_scalar_params (test_integrate.DOP853CheckParameterUse) ... ok test_vector_param (test_integrate.DOP853CheckParameterUse) ... ok test_no_params (test_integrate.DOPRI5CheckParameterUse) ... 
ok test_one_scalar_param (test_integrate.DOPRI5CheckParameterUse) ... ok test_two_scalar_params (test_integrate.DOPRI5CheckParameterUse) ... ok test_vector_param (test_integrate.DOPRI5CheckParameterUse) ... ok Check the dop853 solver ... ok Check the dopri5 solver ... ok Check the vode solver ... ok test_concurrent_fail (test_integrate.TestOde) ... ok test_concurrent_ok (test_integrate.TestOde) ... ok Check the dop853 solver ... ok Check the dopri5 solver ... ok Check the vode solver ... ok Check the zvode solver ... ok test_odeint (test_integrate.TestOdeint) ... ok test_no_params (test_integrate.VODECheckParameterUse) ... ok test_one_scalar_param (test_integrate.VODECheckParameterUse) ... ok test_two_scalar_params (test_integrate.VODECheckParameterUse) ... ok test_vector_param (test_integrate.VODECheckParameterUse) ... ok test_no_params (test_integrate.ZVODECheckParameterUse) ... ok test_one_scalar_param (test_integrate.ZVODECheckParameterUse) ... ok test_two_scalar_params (test_integrate.ZVODECheckParameterUse) ... ok test_vector_param (test_integrate.ZVODECheckParameterUse) ... ok test_algebraic_log_weight (test_quadpack.TestQuad) ... ok test_cauchypv_weight (test_quadpack.TestQuad) ... ok test_cosine_weighted_infinite (test_quadpack.TestQuad) ... ok test_double_integral (test_quadpack.TestQuad) ... ok test_indefinite (test_quadpack.TestQuad) ... ok test_sine_weighted_finite (test_quadpack.TestQuad) ... ok test_sine_weighted_infinite (test_quadpack.TestQuad) ... ok test_singular (test_quadpack.TestQuad) ... ok test_triple_integral (test_quadpack.TestQuad) ... ok test_typical (test_quadpack.TestQuad) ... ok Test the first few degrees, for evenly spaced points. ... ok Test newton_cotes with points that are not evenly spaced. ... ok test_non_dtype (test_quadrature.TestQuadrature) ... ok test_quadrature (test_quadrature.TestQuadrature) ... ok test_quadrature_rtol (test_quadrature.TestQuadrature) ... ok test_romb (test_quadrature.TestQuadrature) ... 
ok test_romberg (test_quadrature.TestQuadrature) ... ok test_romberg_rtol (test_quadrature.TestQuadrature) ... ok test_bilinearity (test_fitpack.TestLSQBivariateSpline) ... /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:674: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ok Test whether empty inputs returns an empty output. Ticket 1014 ... ok test_integral (test_fitpack.TestLSQBivariateSpline) ... ok test_linear_constant (test_fitpack.TestLSQBivariateSpline) ... ok test_defaults (test_fitpack.TestRectBivariateSpline) ... ok test_evaluate (test_fitpack.TestRectBivariateSpline) ... ok test_integral (test_fitpack.TestSmoothBivariateSpline) ... /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:605: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ok test_linear_1d (test_fitpack.TestSmoothBivariateSpline) ... ok test_linear_constant (test_fitpack.TestSmoothBivariateSpline) ... ok Test whether empty input returns an empty output. Ticket 1014 ... ok test_linear_1d (test_fitpack.TestUnivariateSpline) ... ok test_linear_constant (test_fitpack.TestUnivariateSpline) ... ok test_preserve_shape (test_fitpack.TestUnivariateSpline) ... ok Regression test for #1375. ... ok test_subclassing (test_fitpack.TestUnivariateSpline) ... ok test_interpnd.TestCloughTocher2DInterpolator.test_dense ... ok test_interpnd.TestCloughTocher2DInterpolator.test_linear_smoketest ... ok test_interpnd.TestCloughTocher2DInterpolator.test_quadratic_smoketest ... 
ok test_interpnd.TestCloughTocher2DInterpolator.test_wrong_ndim ... ok test_interpnd.TestEstimateGradients2DGlobal.test_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_complex_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_smoketest ... ok test_interpnd.TestLinearNDInterpolation.test_smoketest_alternate ... ok test_interpnd.TestLinearNDInterpolation.test_square ... ok test_interpolate.TestInterp1D.test_bounds ... ok test_interpolate.TestInterp1D.test_complex ... ok Check the actual implementation of spline interpolation. ... ok Check that the attributes are initialized appropriately by the ... ok Check the actual implementation of linear interpolation. ... ok test_interpolate.TestInterp1D.test_nd ... ok test_interpolate.TestInterp1D.test_nd_zero_spline ... KNOWNFAIL: zero-order splines fail for the last point Check the actual implementation of nearest-neighbour interpolation. ... ok Make sure that appropriate exceptions are raised when invalid values ... ok Check the actual implementation of zero-order spline interpolation. ... KNOWNFAIL: zero-order splines fail for the last point test_interp2d (test_interpolate.TestInterp2D) ... ok test_interp2d_meshgrid_input (test_interpolate.TestInterp2D) ... ok test_lagrange (test_interpolate.TestLagrange) ... ok test_block_average_above (test_interpolate_wrapper.Test) ... ok test_linear (test_interpolate_wrapper.Test) ... ok test_linear2 (test_interpolate_wrapper.Test) ... ok test_logarithmic (test_interpolate_wrapper.Test) ... /usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/numeric.py:1920: RuntimeWarning: invalid value encountered in absolute return all(less_equal(absolute(x-y), atol + rtol * absolute(y))) ok test_nearest (test_interpolate_wrapper.Test) ... ok test_ndgriddata.TestGriddata.test_1d ... ok test_ndgriddata.TestGriddata.test_1d_unsorted ... ok test_ndgriddata.TestGriddata.test_alternative_call ... ok test_ndgriddata.TestGriddata.test_complex_2d ... 
ok test_ndgriddata.TestGriddata.test_fill_value ... ok test_ndgriddata.TestGriddata.test_multipoint_2d ... ok test_ndgriddata.TestGriddata.test_multivalue_2d ... ok test_append (test_polyint.CheckBarycentric) ... ok test_delayed (test_polyint.CheckBarycentric) ... ok test_lagrange (test_polyint.CheckBarycentric) ... ok test_scalar (test_polyint.CheckBarycentric) ... ok test_shapes_1d_vectorvalue (test_polyint.CheckBarycentric) ... ok test_shapes_scalarvalue (test_polyint.CheckBarycentric) ... ok test_shapes_vectorvalue (test_polyint.CheckBarycentric) ... ok test_vector (test_polyint.CheckBarycentric) ... ok test_wrapper (test_polyint.CheckBarycentric) ... ok test_derivative (test_polyint.CheckKrogh) ... ok test_derivatives (test_polyint.CheckKrogh) ... ok test_empty (test_polyint.CheckKrogh) ... ok test_hermite (test_polyint.CheckKrogh) ... ok test_high_derivative (test_polyint.CheckKrogh) ... ok test_lagrange (test_polyint.CheckKrogh) ... ok test_low_derivatives (test_polyint.CheckKrogh) ... ok test_scalar (test_polyint.CheckKrogh) ... ok test_shapes_1d_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue (test_polyint.CheckKrogh) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue (test_polyint.CheckKrogh) ... ok test_shapes_vectorvalue_derivative (test_polyint.CheckKrogh) ... ok test_vector (test_polyint.CheckKrogh) ... ok test_wrapper (test_polyint.CheckKrogh) ... ok test_construction (test_polyint.CheckPiecewise) ... ok test_derivative (test_polyint.CheckPiecewise) ... ok test_derivatives (test_polyint.CheckPiecewise) ... ok test_incremental (test_polyint.CheckPiecewise) ... ok test_scalar (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue (test_polyint.CheckPiecewise) ... ok test_shapes_scalarvalue_derivative (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue (test_polyint.CheckPiecewise) ... ok test_shapes_vectorvalue_1d (test_polyint.CheckPiecewise) ... 
ok test_shapes_vectorvalue_derivative (test_polyint.CheckPiecewise) ... ok test_vector (test_polyint.CheckPiecewise) ... ok test_wrapper (test_polyint.CheckPiecewise) ... ok test_exponential (test_polyint.CheckTaylor) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('inverse multiquadric',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('gaussian',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('cubic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('quintic',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('thin-plate',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_interpolation('linear',) ... ok test_rbf.test_rbf_regularity('multiquadric', 0.05) ... ok test_rbf.test_rbf_regularity('inverse multiquadric', 0.02) ... ok test_rbf.test_rbf_regularity('gaussian', 0.01) ... ok test_rbf.test_rbf_regularity('cubic', 0.15) ... ok test_rbf.test_rbf_regularity('quintic', 0.1) ... ok test_rbf.test_rbf_regularity('thin-plate', 0.1) ... ok test_rbf.test_rbf_regularity('linear', 0.2) ... ok Check that the Rbf class can be constructed with the default ... ok Check that the Rbf class can be constructed with function=callable. ... ok Check that the Rbf class can be constructed with a two argument ... ok Ticket #629 ... 
ok Parsing trivial file with nothing. ... ok Parsing trivial file with some comments in the data section. ... ok Test reading from file-like object (StringIO) ... ok Test parsing wrong type of attribute from their value. ... ok Parsing trivial header with nothing. ... ok Test parsing type of attribute from their value. ... ok test_missing (test_arffread.MissingDataTest) ... ok test_from_number (test_fortran_format.TestExpFormat) ... ok test_to_fortran (test_fortran_format.TestExpFormat) ... ok test_exp_exp (test_fortran_format.TestFortranFormatParser) ... ok test_repeat_exp (test_fortran_format.TestFortranFormatParser) ... ok test_repeat_exp_exp (test_fortran_format.TestFortranFormatParser) ... ok test_simple_exp (test_fortran_format.TestFortranFormatParser) ... ok test_simple_int (test_fortran_format.TestFortranFormatParser) ... ok test_simple_repeated_int (test_fortran_format.TestFortranFormatParser) ... ok test_wrong_formats (test_fortran_format.TestFortranFormatParser) ... ok test_from_number (test_fortran_format.TestIntFormat) ... ok test_to_fortran (test_fortran_format.TestIntFormat) ... ok test_simple (test_hb.TestHBReader) ... ok test_simple (test_hb.TestRBRoundtrip) ... ok test_byteordercodes.test_native ... ok test_byteordercodes.test_to_numpy ... ok test_mio.test_load('double', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_7.4_GLNX86.mat'], {'testdouble': array([[ 0. , 0.78539816, 1.57079633, 2.35619449, 3.14159265, ... 
ok test_mio.test_load('string', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststring_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststring_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststring_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststring_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststring_7.4_GLNX86.mat'], {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_load('complex', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcomplex_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcomplex_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcomplex_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcomplex_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcomplex_7.4_GLNX86.mat'], {'testcomplex': array([[ 1.00000000e+00 +0.00000000e+00j, ... 
ok test_mio.test_load('matrix', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmatrix_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmatrix_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmatrix_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmatrix_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmatrix_7.4_GLNX86.mat'], {'testmatrix': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_load('sparse', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... 
ok test_mio.test_load('sparsecomplex', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_load('multi', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmulti_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmulti_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testmulti_7.4_GLNX86.mat'], {'a': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_load('minus', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testminus_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testminus_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testminus_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testminus_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testminus_7.4_GLNX86.mat'], {'testminus': array([[-1]])}) ... 
ok test_mio.test_load('onechar', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testonechar_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testonechar_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testonechar_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testonechar_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testonechar_7.4_GLNX86.mat'], {'testonechar': array([u'r'], ... ok test_mio.test_load('cell', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcell_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcell_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcell_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcell_7.4_GLNX86.mat'], {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... ok test_mio.test_load('scalarcell', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testscalarcell_7.4_GLNX86.mat'], {'testscalarcell': array([[[[1]]]], dtype=object)}) ... 
ok test_mio.test_load('emptycell', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testemptycell_5.3_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testemptycell_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testemptycell_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testemptycell_7.4_GLNX86.mat'], {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}) ... ok test_mio.test_load('stringarray', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststringarray_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststringarray_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststringarray_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststringarray_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststringarray_7.4_GLNX86.mat'], {'teststringarray': array([u'one ', u'two ', u'three'], ... ok test_mio.test_load('3dmatrix', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/test3dmatrix_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/test3dmatrix_7.4_GLNX86.mat'], {'test3dmatrix': array([[[ 1, 7, 13, 19], ... 
ok test_mio.test_load('struct', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststruct_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststruct_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststruct_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststruct_7.4_GLNX86.mat'], {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.718281828459045, 3.141592653589793]], [[(1.4142135623730951+1.4142135623730951j), (2.718281828459045+2.718281828459045j), (3.141592653589793+3.141592653589793j)]])]], ... ok test_mio.test_load('cellnest', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcellnest_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcellnest_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcellnest_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testcellnest_7.4_GLNX86.mat'], {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}) ... ok test_mio.test_load('structnest', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructnest_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructnest_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructnest_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructnest_7.4_GLNX86.mat'], {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... 
ok test_mio.test_load('structarr', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructarr_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructarr_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructarr_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/teststructarr_7.4_GLNX86.mat'], {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok test_mio.test_load('object', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testobject_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testobject_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testobject_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testobject_7.4_GLNX86.mat'], {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... 
ok test_mio.test_load('unicode', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testunicode_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testunicode_7.4_GLNX86.mat'], {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... ok test_mio.test_load('sparse', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparse_7.4_GLNX86.mat'], {'testsparse': <3x5 sparse matrix of type '' ... 
ok test_mio.test_load('sparsecomplex', ['/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_4.2c_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.1_SOL2.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_6.5.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.1_GLNX86.mat', '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testsparsecomplex_7.4_GLNX86.mat'], {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('double_round_trip', {'testdouble': array([[ 0. , 0.78539816, 1.57079633, 2.35619449, 3.14159265, ... ok test_mio.test_round_trip('string_round_trip', {'teststring': array([u'"Do nine men interpret?" "Nine men," I nod.'], ... ok test_mio.test_round_trip('complex_round_trip', {'testcomplex': array([[ 1.00000000e+00 +0.00000000e+00j, ... ok test_mio.test_round_trip('matrix_round_trip', {'testmatrix': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok test_mio.test_round_trip('multi_round_trip', {'a': array([[ 1., 2., 3., 4., 5.], ... ok test_mio.test_round_trip('minus_round_trip', {'testminus': array([[-1]])}, '4') ... ok test_mio.test_round_trip('onechar_round_trip', {'testonechar': array([u'r'], ... ok test_mio.test_round_trip('cell_round_trip', {'testcell': array([[[u'This cell contains this string and 3 arrays of increasing length'], ... ok test_mio.test_round_trip('scalarcell_round_trip', {'testscalarcell': array([[[[1]]]], dtype=object)}, '5') ... 
ok
test_mio.test_round_trip('emptycell_round_trip', {'testemptycell': array([[[[1]], [[2]], [], [], [[3]]]], dtype=object)}, '5') ... ok
test_mio.test_round_trip('stringarray_round_trip', {'teststringarray': array([u'one ', u'two ', u'three'], ... ok
test_mio.test_round_trip('3dmatrix_round_trip', {'test3dmatrix': array([[[ 1, 7, 13, 19], ... ok
test_mio.test_round_trip('struct_round_trip', {'teststruct': array([[ ([u'Rats live on no evil star.'], [[1.4142135623730951, 2.718281828459045, 3.141592653589793]], [[(1.4142135623730951+1.4142135623730951j), (2.718281828459045+2.718281828459045j), (3.141592653589793+3.141592653589793j)]])]], ... ok
test_mio.test_round_trip('cellnest_round_trip', {'testcellnest': array([[[[1]], [[[[2]] [[3]] [[[[4]] [[5]]]]]]]], dtype=object)}, '5') ... ok
test_mio.test_round_trip('structnest_round_trip', {'teststructnest': array([[([[1]], [[(array([u'number 3'], ... ok
test_mio.test_round_trip('structarr_round_trip', {'teststructarr': array([[([[1]], [[2]]), ([u'number 1'], [u'number 2'])]], ... ok
test_mio.test_round_trip('object_round_trip', {'testobject': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]])]], ... ok
test_mio.test_round_trip('unicode_round_trip', {'testunicode': array([ u'Japanese: \n\u3059\u3079\u3066\u306e\u4eba\u9593\u306f\u3001\u751f\u307e\u308c\u306a\u304c\u3089\u306b\u3057\u3066\u81ea\u7531\u3067\u3042\u308a\u3001\n\u304b\u3064\u3001\u5c0a\u53b3\u3068\u6a29\u5229\u3068 \u306b\u3064\u3044\u3066\u5e73\u7b49\u3067\u3042\u308b\u3002\n\u4eba\u9593\u306f\u3001\u7406\u6027\u3068\u826f\u5fc3\u3068\u3092\u6388\u3051\u3089\u308c\u3066\u304a\u308a\u3001\n\u4e92\u3044\u306b\u540c\u80de\u306e\u7cbe\u795e\u3092\u3082\u3063\u3066\u884c\u52d5\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002'], ... ok
test_mio.test_round_trip('sparse_round_trip', {'testsparse': <3x5 sparse matrix of type '' ...
ok
test_mio.test_round_trip('sparsecomplex_round_trip', {'testsparsecomplex': <3x5 sparse matrix of type '' ... ok
test_mio.test_round_trip('objectarray_round_trip', {'testobjectarray': MatlabObject([[([u'x'], [u' x = INLINE_INPUTS_{1};'], [u'x'], [[0]], [[1]], [[1]]), ... ok
test_mio.test_gzip_simple ... ok
test_mio.test_multiple_open ... ok
test_mio.test_mat73 ... ok
test_mio.test_warnings(, , '/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/matlab/tests/data/testdouble_7.1_GLNX86.mat') ... ok
Regression test for #653. ... ok
test_mio.test_structname_len ... ok
test_mio.test_4_and_long_field_names_incompatible ... ok
test_mio.test_long_field_names ... ok
test_mio.test_long_field_names_in_struct ... ok
test_mio.test_cell_with_one_thing_in_it ... ok
test_mio.test_writer_properties([], []) ... ok
test_mio.test_writer_properties(['avar'], ['avar']) ... ok
test_mio.test_writer_properties(False, False) ... ok
test_mio.test_writer_properties(True, True) ... ok
test_mio.test_writer_properties(False, False) ... ok
test_mio.test_writer_properties(True, True) ... ok
test_mio.test_use_small_element(True,) ... ok
test_mio.test_use_small_element(True,) ... ok
test_mio.test_save_dict ... ok
test_mio.test_1d_shape ... ok
test_mio.test_compression(array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok
test_mio.test_compression(array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok
test_mio.test_compression(True,) ... ok
test_mio.test_compression(array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok
test_mio.test_compression(array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ... ok
test_mio.test_single_object ... ok
test_mio.test_skip_variable(True,) ... ok
test_mio.test_skip_variable(True,) ... ok
test_mio.test_skip_variable(True,) ... ok
test_mio.test_empty_struct ... ok
test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok
test_mio.test_recarray(array([u'python'], ...
ok
test_mio.test_recarray(array([[ 0.5]]), 0.5) ... ok
test_mio.test_recarray(array([u'python'], ... ok
test_mio.test_recarray(dtype([('f1', '|O8'), ('f2', '|O8')]), dtype([('f1', '|O8'), ('f2', '|O8')])) ... ok
test_mio.test_recarray(array([[ 99.]]), 99) ... ok
test_mio.test_recarray(array([u'not perl'], ... ok
test_mio.test_save_object ... ok
test_mio.test_read_opts ... ok
test_mio.test_empty_string ... ok
test_mio.test_mat4_3d ... ok
test_mio.test_func_read(True,) ... ok
test_mio.test_func_read(, >, {'__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, Platform: GLNX86, Created on: Fri Feb 20 15:26:59 2009', 'testfunc': MatlabFunction([[ ([u'/opt/matlab-2007a'], [u'/'], [u'@'], [[(array([u'afunc'], ... ok
test_mio.test_mat_dtype('u', 'u') ... ok
test_mio.test_mat_dtype('f', 'f') ... ok
test_mio.test_sparse_in_struct(matrix([[ 1., 0., 0., 0.], ... ok
test_mio.test_mat_struct_squeeze ... ok
test_mio.test_str_round ... ok
test_mio.test_fieldnames ... ok
test_mio.test_loadmat_varnames ... ok
test_mio.test_round_types ... ok
test_mio.test_varmats_from_mat ... ok
Test 1x0 chars get read correctly ... ok
test_mio5_utils.test_byteswap(16777216L, 16777216L) ... ok
test_mio5_utils.test_byteswap(1L, 1L) ... ok
test_mio5_utils.test_byteswap(65536L, 65536L) ... ok
test_mio5_utils.test_byteswap(256L, 256L) ... ok
test_mio5_utils.test_byteswap(256L, 256L) ... ok
test_mio5_utils.test_byteswap(65536L, 65536L) ... ok
test_mio5_utils.test_read_tag(, ) ... ok
test_mio5_utils.test_read_tag(, ) ... ok
test_mio5_utils.test_read_stream('\x05\x00\x04\x00\x01\x00\x00\x00', '\x05\x00\x04\x00\x01\x00\x00\x00') ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ...
ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(array([30], dtype=uint16), 30) ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(array([1], dtype=int32), 1) ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ...
ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(0, False) ... ok
test_mio5_utils.test_read_numeric(1, True) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric(array([-1], dtype=int16), -1) ... ok
test_mio5_utils.test_read_numeric_writeable(True,) ... ok
test_mio5_utils.test_zero_byte_string ... ok
test_mio_funcs.test_jottings ... ok
test_mio_utils.test_cproduct ... ok
test_mio_utils.test_squeeze_element ... ok
test_mio_utils.test_chars_strings ... ok
test_pathological.test_multiple_fieldnames ... ok
test_streams.test_make_stream ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(5, 5) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(7, 7) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(6, 6) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(5, 5) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(7, 7) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(6, 6) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(0, 0) ...
ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(5, 5) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(7, 7) ... ok
test_streams.test_tell_seek(0, 0) ... ok
test_streams.test_tell_seek(6, 6) ... ok
test_streams.test_read('a\x00string', 'a\x00string') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_streams.test_read('a\x00string', 'a\x00string') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_streams.test_read('a\x00string', 'a\x00string') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_streams.test_read('a\x00st', 'a\x00st') ... ok
test_streams.test_read('ring', 'ring') ... ok
test_streams.test_read(, , , 2) ... ok
test_idl.TestArrayDimensions.test_1d ... ok
test_idl.TestArrayDimensions.test_2d ... ok
test_idl.TestArrayDimensions.test_3d ... ok
test_idl.TestArrayDimensions.test_4d ... ok
test_idl.TestArrayDimensions.test_5d ... ok
test_idl.TestArrayDimensions.test_6d ... ok
test_idl.TestArrayDimensions.test_7d ... ok
test_idl.TestArrayDimensions.test_8d ... ok
test_idl.TestCompressed.test_byte ... ok
test_idl.TestCompressed.test_bytes ... ok
test_idl.TestCompressed.test_complex32 ... ok
test_idl.TestCompressed.test_complex64 ... ok
test_idl.TestCompressed.test_compressed ...
ok
test_idl.TestCompressed.test_float32 ... ok
test_idl.TestCompressed.test_float64 ... ok
test_idl.TestCompressed.test_heap_pointer ... ok
test_idl.TestCompressed.test_int16 ... ok
test_idl.TestCompressed.test_int32 ... ok
test_idl.TestCompressed.test_int64 ... ok
test_idl.TestCompressed.test_object_reference ... ok
test_idl.TestCompressed.test_structure ... ok
test_idl.TestCompressed.test_uint16 ... ok
test_idl.TestCompressed.test_uint32 ... ok
test_idl.TestCompressed.test_uint64 ... ok
test_idl.TestIdict.test_idict ... ok
test_idl.TestPointerArray.test_1d ... ok
test_idl.TestPointerArray.test_2d ... ok
test_idl.TestPointerArray.test_3d ... ok
test_idl.TestPointerArray.test_4d ... ok
test_idl.TestPointerArray.test_5d ... ok
test_idl.TestPointerArray.test_6d ... ok
test_idl.TestPointerArray.test_7d ... ok
test_idl.TestPointerArray.test_8d ... ok
test_idl.TestPointerStructures.test_arrays ... ok
test_idl.TestPointerStructures.test_arrays_replicated ... ok
test_idl.TestPointerStructures.test_arrays_replicated_3d ... ok
test_idl.TestPointerStructures.test_pointers_replicated ... ok
test_idl.TestPointerStructures.test_pointers_replicated_3d ... ok
test_idl.TestPointerStructures.test_scalars ... ok
test_idl.TestPointers.test_pointers ... ok
test_idl.TestScalars.test_byte ... ok
test_idl.TestScalars.test_bytes ... ok
test_idl.TestScalars.test_complex32 ... ok
test_idl.TestScalars.test_complex64 ... ok
test_idl.TestScalars.test_float32 ... ok
test_idl.TestScalars.test_float64 ... ok
test_idl.TestScalars.test_heap_pointer ... ok
test_idl.TestScalars.test_int16 ... ok
test_idl.TestScalars.test_int32 ... ok
test_idl.TestScalars.test_int64 ... ok
test_idl.TestScalars.test_object_reference ... ok
test_idl.TestScalars.test_structure ... ok
test_idl.TestScalars.test_uint16 ... ok
test_idl.TestScalars.test_uint32 ... ok
test_idl.TestScalars.test_uint64 ... ok
test_idl.TestStructures.test_arrays ... ok
test_idl.TestStructures.test_arrays_replicated ...
ok
test_idl.TestStructures.test_arrays_replicated_3d ... ok
test_idl.TestStructures.test_inheritance ... ok
test_idl.TestStructures.test_scalars ... ok
test_idl.TestStructures.test_scalars_replicated ... ok
test_idl.TestStructures.test_scalars_replicated_3d ... ok
test_random_rect_real (test_mmio.TestMMIOArray) ... ok
test_random_symmetric_real (test_mmio.TestMMIOArray) ... ok
test_simple (test_mmio.TestMMIOArray) ... ok
test_simple_complex (test_mmio.TestMMIOArray) ... ok
test_simple_hermitian (test_mmio.TestMMIOArray) ... ok
test_simple_real (test_mmio.TestMMIOArray) ... ok
test_simple_rectangular (test_mmio.TestMMIOArray) ... ok
test_simple_rectangular_real (test_mmio.TestMMIOArray) ... ok
test_simple_skew_symmetric (test_mmio.TestMMIOArray) ... ok
test_simple_skew_symmetric_float (test_mmio.TestMMIOArray) ... ok
test_simple_symmetric (test_mmio.TestMMIOArray) ... ok
test_complex_write_read (test_mmio.TestMMIOCoordinate) ... ok
test_empty_write_read (test_mmio.TestMMIOCoordinate) ... ok
read a general matrix ... ok
read a hermitian matrix ... ok
read a skew-symmetric matrix ... ok
read a symmetric matrix ... ok
read a symmetric pattern matrix ... ok
test_real_write_read (test_mmio.TestMMIOCoordinate) ... ok
test_sparse_formats (test_mmio.TestMMIOCoordinate) ... ok
test_netcdf.test_read_write_files(True,) ... ok
test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_files((11,), (11,)) ... ok
test_netcdf.test_read_write_files(10, 10) ... ok
test_netcdf.test_read_write_files(False,) ... ok
test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_files((11,), (11,)) ... ok
test_netcdf.test_read_write_files(10, 10) ... ok
test_netcdf.test_read_write_files(False,) ...
ok
test_netcdf.test_read_write_files('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_files('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_files((11,), (11,)) ... ok
test_netcdf.test_read_write_files(10, 10) ... ok
test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_sio((11,), (11,)) ... ok
test_netcdf.test_read_write_sio(10, 10) ... ok
test_netcdf.test_read_write_sio(, , , 'r', True) ... ok
test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_sio((11,), (11,)) ... ok
test_netcdf.test_read_write_sio(10, 10) ... ok
test_netcdf.test_read_write_sio(2, 2) ... ok
test_netcdf.test_read_write_sio('Created for a test', 'Created for a test') ... ok
test_netcdf.test_read_write_sio('days since 2008-01-01', 'days since 2008-01-01') ... ok
test_netcdf.test_read_write_sio((11,), (11,)) ... ok
test_netcdf.test_read_write_sio(10, 10) ... ok
test_netcdf.test_read_write_sio(2, 2) ... ok
test_netcdf.test_read_example_data ... ok
test_netcdf.test_write_invalid_dtype(, >, 'time', 'int64', ('time',)) ... ok
test_netcdf.test_write_invalid_dtype(, >, 'time', 'uint64', ('time',)) ... ok
test_netcdf.test_write_invalid_dtype(, >, 'time', 'int', ('time',)) ... ok
test_netcdf.test_write_invalid_dtype(, >, 'time', 'uint', ('time',)) ... ok
test_wavfile.test_read_1 ...
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:31: WavFileWarning: Unfamiliar format bytes
  warnings.warn("Unfamiliar format bytes", WavFileWarning)
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/io/wavfile.py:121: WavFileWarning: chunk not understood
  warnings.warn("chunk not understood", WavFileWarning)
ok
test_wavfile.test_read_2 ... ok
test_wavfile.test_read_fail ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i2'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i2'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int16'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int16'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int16'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int16'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int16'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int16'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i4'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i4'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int32'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int32'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int32'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int32'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int32'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int32'), 5) ...
ok
test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>i8'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>i8'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int64'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int64'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('int64'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int64'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int64'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('int64'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint8'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint8'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u2'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u2'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint16'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint16'), 5) ...
ok
test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u4'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u4'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint32'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint32'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('>u8'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('>u8'), 5) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 1) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 2) ... ok
test_wavfile.test_write_roundtrip(8000, dtype('uint64'), 5) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 1) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 2) ... ok
test_wavfile.test_write_roundtrip(32000, dtype('uint64'), 5) ... ok
test_blas (test_blas.TestBLAS) ... ok
test_axpy (test_blas.TestCBLAS1Simple) ... ok
test_amax (test_blas.TestFBLAS1Simple) ... ok
test_asum (test_blas.TestFBLAS1Simple) ... FAIL
test_axpy (test_blas.TestFBLAS1Simple) ... ok
test_copy (test_blas.TestFBLAS1Simple) ... ok
test_dot (test_blas.TestFBLAS1Simple) ... FAIL
test_nrm2 (test_blas.TestFBLAS1Simple) ... FAIL
test_scal (test_blas.TestFBLAS1Simple) ...
ok
test_swap (test_blas.TestFBLAS1Simple) ... ok
test_gemv (test_blas.TestFBLAS2Simple) ... ok
test_ger (test_blas.TestFBLAS2Simple) ... ok
test_gemm (test_blas.TestFBLAS3Simple) ... ok
test_gemm2 (test_blas.TestFBLAS3Simple) ... ok
test_default_a (test_fblas.TestCaxpy) ... ok
test_simple (test_fblas.TestCaxpy) ... ok
test_x_and_y_stride (test_fblas.TestCaxpy) ... ok
test_x_bad_size (test_fblas.TestCaxpy) ... ok
test_x_stride (test_fblas.TestCaxpy) ... ok
test_y_bad_size (test_fblas.TestCaxpy) ... ok
test_y_stride (test_fblas.TestCaxpy) ... ok
test_simple (test_fblas.TestCcopy) ... ok
test_x_and_y_stride (test_fblas.TestCcopy) ... ok
test_x_bad_size (test_fblas.TestCcopy) ... ok
test_x_stride (test_fblas.TestCcopy) ... ok
test_y_bad_size (test_fblas.TestCcopy) ... ok
test_y_stride (test_fblas.TestCcopy) ... ok
test_default_beta_y (test_fblas.TestCgemv) ... ok
test_simple (test_fblas.TestCgemv) ... ok
test_simple_transpose (test_fblas.TestCgemv) ... ok
test_simple_transpose_conj (test_fblas.TestCgemv) ... ok
test_x_stride (test_fblas.TestCgemv) ... ok
test_x_stride_assert (test_fblas.TestCgemv) ... ok
test_x_stride_transpose (test_fblas.TestCgemv) ... ok
test_y_stride (test_fblas.TestCgemv) ... ok
test_y_stride_assert (test_fblas.TestCgemv) ... ok
test_y_stride_transpose (test_fblas.TestCgemv) ... ok
test_simple (test_fblas.TestCscal) ... ok
test_x_bad_size (test_fblas.TestCscal) ... ok
test_x_stride (test_fblas.TestCscal) ... ok
test_simple (test_fblas.TestCswap) ... ok
test_x_and_y_stride (test_fblas.TestCswap) ... ok
test_x_bad_size (test_fblas.TestCswap) ... ok
test_x_stride (test_fblas.TestCswap) ... ok
test_y_bad_size (test_fblas.TestCswap) ... ok
test_y_stride (test_fblas.TestCswap) ... ok
test_default_a (test_fblas.TestDaxpy) ... ok
test_simple (test_fblas.TestDaxpy) ... ok
test_x_and_y_stride (test_fblas.TestDaxpy) ... ok
test_x_bad_size (test_fblas.TestDaxpy) ... ok
test_x_stride (test_fblas.TestDaxpy) ... ok
test_y_bad_size (test_fblas.TestDaxpy) ...
ok
test_y_stride (test_fblas.TestDaxpy) ... ok
test_simple (test_fblas.TestDcopy) ... ok
test_x_and_y_stride (test_fblas.TestDcopy) ... ok
test_x_bad_size (test_fblas.TestDcopy) ... ok
test_x_stride (test_fblas.TestDcopy) ... ok
test_y_bad_size (test_fblas.TestDcopy) ... ok
test_y_stride (test_fblas.TestDcopy) ... ok
test_default_beta_y (test_fblas.TestDgemv) ... ok
test_simple (test_fblas.TestDgemv) ... ok
test_simple_transpose (test_fblas.TestDgemv) ... ok
test_simple_transpose_conj (test_fblas.TestDgemv) ... ok
test_x_stride (test_fblas.TestDgemv) ... ok
test_x_stride_assert (test_fblas.TestDgemv) ... ok
test_x_stride_transpose (test_fblas.TestDgemv) ... ok
test_y_stride (test_fblas.TestDgemv) ... ok
test_y_stride_assert (test_fblas.TestDgemv) ... ok
test_y_stride_transpose (test_fblas.TestDgemv) ... ok
test_simple (test_fblas.TestDscal) ... ok
test_x_bad_size (test_fblas.TestDscal) ... ok
test_x_stride (test_fblas.TestDscal) ... ok
test_simple (test_fblas.TestDswap) ... ok
test_x_and_y_stride (test_fblas.TestDswap) ... ok
test_x_bad_size (test_fblas.TestDswap) ... ok
test_x_stride (test_fblas.TestDswap) ... ok
test_y_bad_size (test_fblas.TestDswap) ... ok
test_y_stride (test_fblas.TestDswap) ... ok
test_default_a (test_fblas.TestSaxpy) ... ok
test_simple (test_fblas.TestSaxpy) ... ok
test_x_and_y_stride (test_fblas.TestSaxpy) ... ok
test_x_bad_size (test_fblas.TestSaxpy) ... ok
test_x_stride (test_fblas.TestSaxpy) ... ok
test_y_bad_size (test_fblas.TestSaxpy) ... ok
test_y_stride (test_fblas.TestSaxpy) ... ok
test_simple (test_fblas.TestScopy) ... ok
test_x_and_y_stride (test_fblas.TestScopy) ... ok
test_x_bad_size (test_fblas.TestScopy) ... ok
test_x_stride (test_fblas.TestScopy) ... ok
test_y_bad_size (test_fblas.TestScopy) ... ok
test_y_stride (test_fblas.TestScopy) ... ok
test_default_beta_y (test_fblas.TestSgemv) ... ok
test_simple (test_fblas.TestSgemv) ... ok
test_simple_transpose (test_fblas.TestSgemv) ...
ok
test_simple_transpose_conj (test_fblas.TestSgemv) ... ok
test_x_stride (test_fblas.TestSgemv) ... ok
test_x_stride_assert (test_fblas.TestSgemv) ... ok
test_x_stride_transpose (test_fblas.TestSgemv) ... ok
test_y_stride (test_fblas.TestSgemv) ... ok
test_y_stride_assert (test_fblas.TestSgemv) ... ok
test_y_stride_transpose (test_fblas.TestSgemv) ... ok
test_simple (test_fblas.TestSscal) ... ok
test_x_bad_size (test_fblas.TestSscal) ... ok
test_x_stride (test_fblas.TestSscal) ... ok
test_simple (test_fblas.TestSswap) ... ok
test_x_and_y_stride (test_fblas.TestSswap) ... ok
test_x_bad_size (test_fblas.TestSswap) ... ok
test_x_stride (test_fblas.TestSswap) ... ok
test_y_bad_size (test_fblas.TestSswap) ... ok
test_y_stride (test_fblas.TestSswap) ... ok
test_default_a (test_fblas.TestZaxpy) ... ok
test_simple (test_fblas.TestZaxpy) ... ok
test_x_and_y_stride (test_fblas.TestZaxpy) ... ok
test_x_bad_size (test_fblas.TestZaxpy) ... ok
test_x_stride (test_fblas.TestZaxpy) ... ok
test_y_bad_size (test_fblas.TestZaxpy) ... ok
test_y_stride (test_fblas.TestZaxpy) ... ok
test_simple (test_fblas.TestZcopy) ... ok
test_x_and_y_stride (test_fblas.TestZcopy) ... ok
test_x_bad_size (test_fblas.TestZcopy) ... ok
test_x_stride (test_fblas.TestZcopy) ... ok
test_y_bad_size (test_fblas.TestZcopy) ... ok
test_y_stride (test_fblas.TestZcopy) ... ok
test_default_beta_y (test_fblas.TestZgemv) ... ok
test_simple (test_fblas.TestZgemv) ... ok
test_simple_transpose (test_fblas.TestZgemv) ... ok
test_simple_transpose_conj (test_fblas.TestZgemv) ... ok
test_x_stride (test_fblas.TestZgemv) ... ok
test_x_stride_assert (test_fblas.TestZgemv) ... ok
test_x_stride_transpose (test_fblas.TestZgemv) ... ok
test_y_stride (test_fblas.TestZgemv) ... ok
test_y_stride_assert (test_fblas.TestZgemv) ... ok
test_y_stride_transpose (test_fblas.TestZgemv) ... ok
test_simple (test_fblas.TestZscal) ... ok
test_x_bad_size (test_fblas.TestZscal) ... ok
test_x_stride (test_fblas.TestZscal) ...
ok
test_simple (test_fblas.TestZswap) ... ok
test_x_and_y_stride (test_fblas.TestZswap) ... ok
test_x_bad_size (test_fblas.TestZswap) ... ok
test_x_stride (test_fblas.TestZswap) ... ok
test_y_bad_size (test_fblas.TestZswap) ... ok
test_y_stride (test_fblas.TestZswap) ... ok
test_clapack_dsyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyev Clapack empty, skip clapack test
test_clapack_dsyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr Clapack empty, skip clapack test
test_clapack_dsyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_dsyevr_ranges Clapack empty, skip clapack test
test_clapack_ssyev (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyev Clapack empty, skip clapack test
test_clapack_ssyevr (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr Clapack empty, skip clapack test
test_clapack_ssyevr_ranges (test_esv.TestEsv) ... SKIP: Skipping test: test_clapack_ssyevr_ranges Clapack empty, skip clapack test
test_dsyev (test_esv.TestEsv) ... ok
test_dsyevr (test_esv.TestEsv) ... ok
test_dsyevr_ranges (test_esv.TestEsv) ... ok
test_ssyev (test_esv.TestEsv) ... ok
test_ssyevr (test_esv.TestEsv) ... ok
test_ssyevr_ranges (test_esv.TestEsv) ... ok
test_clapack_dsygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_1 Clapack empty, skip flapack test
test_clapack_dsygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_2 Clapack empty, skip flapack test
test_clapack_dsygv_3 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_dsygv_3 Clapack empty, skip flapack test
test_clapack_ssygv_1 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_1 Clapack empty, skip flapack test
test_clapack_ssygv_2 (test_gesv.TestSygv) ... SKIP: Skipping test: test_clapack_ssygv_2 Clapack empty, skip flapack test
test_clapack_ssygv_3 (test_gesv.TestSygv) ...
SKIP: Skipping test: test_clapack_ssygv_3 Clapack empty, skip flapack test
test_dsygv_1 (test_gesv.TestSygv) ... ok
test_dsygv_2 (test_gesv.TestSygv) ... ok
test_dsygv_3 (test_gesv.TestSygv) ... ok
test_ssygv_1 (test_gesv.TestSygv) ... ok
test_ssygv_2 (test_gesv.TestSygv) ... ok
test_ssygv_3 (test_gesv.TestSygv) ... ok
test_clapack_dgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgebal Clapack empty, skip flapack test
test_clapack_dgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_dgehrd Clapack empty, skip flapack test
test_clapack_sgebal (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgebal Clapack empty, skip flapack test
test_clapack_sgehrd (test_lapack.TestLapack) ... SKIP: Skipping test: test_clapack_sgehrd Clapack empty, skip flapack test
test_dgebal (test_lapack.TestLapack) ... ok
test_dgehrd (test_lapack.TestLapack) ... ok
test_sgebal (test_lapack.TestLapack) ... ok
test_sgehrd (test_lapack.TestLapack) ... ok
test_random (test_basic.TestDet) ... ok
test_random_complex (test_basic.TestDet) ... ok
test_simple (test_basic.TestDet) ... ok
test_simple_complex (test_basic.TestDet) ... ok
test_random (test_basic.TestInv) ... ok
test_random_complex (test_basic.TestInv) ... ok
test_simple (test_basic.TestInv) ... ok
test_simple_complex (test_basic.TestInv) ... ok
test_random_complex_exact (test_basic.TestLstsq) ... ok
test_random_complex_overdet (test_basic.TestLstsq) ... ok
test_random_exact (test_basic.TestLstsq) ... ok
test_random_overdet (test_basic.TestLstsq) ... ok
test_random_overdet_large (test_basic.TestLstsq) ... ok
test_simple_exact (test_basic.TestLstsq) ... ok
test_simple_overdet (test_basic.TestLstsq) ... ok
test_simple_overdet_complex (test_basic.TestLstsq) ... ok
test_simple_underdet (test_basic.TestLstsq) ... ok
test_basic.TestNorm.test_overflow ... FAIL
test_basic.TestNorm.test_stable ... FAIL
test_basic.TestNorm.test_types ... FAIL
test_basic.TestNorm.test_zero_norm ...
ok
test_basic.TestOverwrite.test_det ... ok
test_basic.TestOverwrite.test_inv ... ok
test_basic.TestOverwrite.test_lstsq ... ok
test_basic.TestOverwrite.test_pinv ... ok
test_basic.TestOverwrite.test_pinv2 ... ok
test_basic.TestOverwrite.test_solve ... ok
test_basic.TestOverwrite.test_solve_banded ... ok
test_basic.TestOverwrite.test_solve_triangular ... ok
test_basic.TestOverwrite.test_solveh_banded ... ok
test_simple (test_basic.TestPinv) ... ok
test_simple_0det (test_basic.TestPinv) ... ok
test_simple_cols (test_basic.TestPinv) ... ok
test_simple_rows (test_basic.TestPinv) ... ok
test_20Feb04_bug (test_basic.TestSolve) ... ok
test_nils_20Feb04 (test_basic.TestSolve) ... ok
test_random (test_basic.TestSolve) ... ok
test_random_complex (test_basic.TestSolve) ... ok
test_random_sym (test_basic.TestSolve) ... ok
test_random_sym_complex (test_basic.TestSolve) ... ok
test_simple (test_basic.TestSolve) ... ok
test_simple_complex (test_basic.TestSolve) ... ok
test_simple_sym (test_basic.TestSolve) ... ok
test_simple_sym_complex (test_basic.TestSolve) ... ok
test_bad_shape (test_basic.TestSolveBanded) ... ok
test_complex (test_basic.TestSolveBanded) ... ok
test_real (test_basic.TestSolveBanded) ... ok
test_01_complex (test_basic.TestSolveHBanded) ... ok
test_01_float32 (test_basic.TestSolveHBanded) ... ok
test_01_lower (test_basic.TestSolveHBanded) ... ok
test_01_upper (test_basic.TestSolveHBanded) ... ok
test_02_complex (test_basic.TestSolveHBanded) ... ok
test_02_float32 (test_basic.TestSolveHBanded) ... ok
test_02_lower (test_basic.TestSolveHBanded) ... ok
test_02_upper (test_basic.TestSolveHBanded) ... ok
test_03_upper (test_basic.TestSolveHBanded) ... ok
test_bad_shapes (test_basic.TestSolveHBanded) ... ok
solve_triangular on a simple 2x2 matrix. ... ok
solve_triangular on a simple 2x2 complex matrix ... ok
test_axpy (test_blas.TestCBLAS1Simple) ... ok
test_amax (test_blas.TestFBLAS1Simple) ... ok
test_asum (test_blas.TestFBLAS1Simple) ...
FAIL
test_axpy (test_blas.TestFBLAS1Simple) ... ok
test_complex_dotc (test_blas.TestFBLAS1Simple) ... ok
test_complex_dotu (test_blas.TestFBLAS1Simple) ... ok
test_copy (test_blas.TestFBLAS1Simple) ... ok
test_dot (test_blas.TestFBLAS1Simple) ... FAIL
test_nrm2 (test_blas.TestFBLAS1Simple) ... FAIL
test_scal (test_blas.TestFBLAS1Simple) ... ok
test_swap (test_blas.TestFBLAS1Simple) ... ok
test_gemv (test_blas.TestFBLAS2Simple) ... ok
test_ger (test_blas.TestFBLAS2Simple) ... ok
test_gemm (test_blas.TestFBLAS3Simple) ... ok
test_blas.test_get_blas_funcs ... ok
test_blas.test_get_blas_funcs_alias ... ok
test_lapack (test_build.TestF77Mismatch) ... SKIP: Skipping test: test_lapack Skipping fortran compiler mismatch on non Linux platform
test_datacopied (test_decomp.TestDatacopied) ... ok
test_simple (test_decomp.TestDiagSVD) ... ok
test_decomp.TestEig.test_bad_geneig ... ok
Test matrices giving some Nan generalized eigen values. ... ok
Check that passing a non-square array raises a ValueError. ... ok
Check that passing arrays of with different shapes raises a ValueError. ... ok
test_decomp.TestEig.test_simple ... ok
test_decomp.TestEig.test_simple_complex ... ok
test_decomp.TestEig.test_simple_complex_eig ... ok
Test singular pair ... ok
Compare dgbtrf LU factorisation with the LU factorisation result ... ok
Compare dgbtrs solutions for linear equation system A*x = b ... ok
Compare dsbev eigenvalues and eigenvectors with ... ok
Compare dsbevd eigenvalues and eigenvectors with ... ok
Compare dsbevx eigenvalues and eigenvectors ... ok
Compare eigenvalues and eigenvectors of eig_banded ... ok
Compare eigenvalues of eigvals_banded with those of linalg.eig. ... ok
Compare zgbtrf LU factorisation with the LU factorisation result ... ok
Compare zgbtrs solutions for linear equation system A*x = b ... ok
Compare zhbevd eigenvalues and eigenvectors ... ok
Compare zhbevx eigenvalues and eigenvectors ... ok
test_simple (test_decomp.TestEigVals) ...
ok test_simple_complex (test_decomp.TestEigVals) ... ok test_simple_tr (test_decomp.TestEigVals) ... ok test_random (test_decomp.TestHessenberg) ... ok test_random_complex (test_decomp.TestHessenberg) ... ok test_simple (test_decomp.TestHessenberg) ... ok test_simple2 (test_decomp.TestHessenberg) ... ok test_simple_complex (test_decomp.TestHessenberg) ... ok test_hrectangular (test_decomp.TestLU) ... ok test_hrectangular_complex (test_decomp.TestLU) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLU) ... ok test_simple2 (test_decomp.TestLU) ... ok test_simple2_complex (test_decomp.TestLU) ... ok test_simple_complex (test_decomp.TestLU) ... ok test_simple_known (test_decomp.TestLU) ... ok test_vrectangular (test_decomp.TestLU) ... ok test_vrectangular_complex (test_decomp.TestLU) ... ok test_hrectangular (test_decomp.TestLUSingle) ... ok test_hrectangular_complex (test_decomp.TestLUSingle) ... ok Check lu decomposition on medium size, rectangular matrix. ... ok Check lu decomposition on medium size, rectangular matrix. ... ok test_simple (test_decomp.TestLUSingle) ... ok test_simple2 (test_decomp.TestLUSingle) ... ok test_simple2_complex (test_decomp.TestLUSingle) ... ok test_simple_complex (test_decomp.TestLUSingle) ... ok test_simple_known (test_decomp.TestLUSingle) ... ok test_vrectangular (test_decomp.TestLUSingle) ... ok test_vrectangular_complex (test_decomp.TestLUSingle) ... ok test_lu (test_decomp.TestLUSolve) ... ok test_decomp.TestOverwrite.test_eig ... ok test_decomp.TestOverwrite.test_eig_banded ... ok test_decomp.TestOverwrite.test_eigh ... ok test_decomp.TestOverwrite.test_eigvals ... ok test_decomp.TestOverwrite.test_eigvals_banded ... ok test_decomp.TestOverwrite.test_eigvalsh ... ok test_decomp.TestOverwrite.test_hessenberg ... ok test_decomp.TestOverwrite.test_lu ... ok test_decomp.TestOverwrite.test_lu_factor ... 
ok test_decomp.TestOverwrite.test_lu_solve ... ok test_decomp.TestOverwrite.test_qr ... ok test_decomp.TestOverwrite.test_rq ... ok test_decomp.TestOverwrite.test_schur ... ok test_decomp.TestOverwrite.test_schur_complex ... ok test_decomp.TestOverwrite.test_svd ... ok test_decomp.TestOverwrite.test_svdvals ... ok test_random (test_decomp.TestQR) ... ok test_random_complex (test_decomp.TestQR) ... ok test_random_complex_pivoting (test_decomp.TestQR) ... ok test_random_pivoting (test_decomp.TestQR) ... ok test_random_tall (test_decomp.TestQR) ... ok test_random_tall_e (test_decomp.TestQR) ... ok test_random_tall_e_pivoting (test_decomp.TestQR) ... ok test_random_tall_pivoting (test_decomp.TestQR) ... ok test_random_trap (test_decomp.TestQR) ... ok test_random_trap_pivoting (test_decomp.TestQR) ... ok test_simple (test_decomp.TestQR) ... ok test_simple_complex (test_decomp.TestQR) ... ok test_simple_complex_pivoting (test_decomp.TestQR) ... ok test_simple_fat (test_decomp.TestQR) ... ok test_simple_fat_e (test_decomp.TestQR) ... ok test_simple_fat_e_pivoting (test_decomp.TestQR) ... ok test_simple_fat_pivoting (test_decomp.TestQR) ... ok test_simple_pivoting (test_decomp.TestQR) ... ok test_simple_tall (test_decomp.TestQR) ... ok test_simple_tall_e (test_decomp.TestQR) ... ok test_simple_tall_e_pivoting (test_decomp.TestQR) ... ok test_simple_tall_pivoting (test_decomp.TestQR) ... ok test_simple_trap (test_decomp.TestQR) ... ok test_simple_trap_pivoting (test_decomp.TestQR) ... ok test_r (test_decomp.TestRQ) ... ok test_random (test_decomp.TestRQ) ... ok test_random_complex (test_decomp.TestRQ) ... ok test_random_complex_economic (test_decomp.TestRQ) ... ok test_random_tall (test_decomp.TestRQ) ... ok test_random_trap (test_decomp.TestRQ) ... ok test_random_trap_economic (test_decomp.TestRQ) ... ok test_simple (test_decomp.TestRQ) ... ok test_simple_complex (test_decomp.TestRQ) ... ok test_simple_fat (test_decomp.TestRQ) ... 
ok test_simple_tall (test_decomp.TestRQ) ... ok test_simple_trap (test_decomp.TestRQ) ... ok test_random (test_decomp.TestSVD) ... ok test_random_complex (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVD) ... ok test_simple_complex (test_decomp.TestSVD) ... ok test_simple_overdet (test_decomp.TestSVD) ... ok test_simple_singular (test_decomp.TestSVD) ... ok test_simple_underdet (test_decomp.TestSVD) ... ok test_simple (test_decomp.TestSVDVals) ... ok test_simple_complex (test_decomp.TestSVDVals) ... ok test_simple_overdet (test_decomp.TestSVDVals) ... ok test_simple_overdet_complex (test_decomp.TestSVDVals) ... ok test_simple_underdet (test_decomp.TestSVDVals) ... ok test_simple_underdet_complex (test_decomp.TestSVDVals) ... ok test_simple (test_decomp.TestSchur) ... ok test_sort (test_decomp.TestSchur) ... ok test_sort_errors (test_decomp.TestSchur) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, True, False, (2, 4)) ... 
ok test_decomp.test_eigh('ordinary', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'f', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'f', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, True, (2, 4)) ... 
ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, None) ... 
ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, None) ... 
ok test_decomp.test_eigh('ordinary', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'F', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', True, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, True, (2, 4)) ... 
ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, True, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, None) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'D', False, False, False, (2, 4)) ... ok test_decomp.test_eigh_integer ... ok Check linalg works with non-aligned memory ... ok Check linalg works with non-aligned memory ... ok Check that complex objects don't need to be completely aligned ... ok test_decomp.test_lapack_misaligned ... KNOWNFAIL: Ticket #1152, triggers a segfault in rare cases. test_random (test_decomp_cholesky.TestCholesky) ... ok test_random_complex (test_decomp_cholesky.TestCholesky) ... ok test_simple (test_decomp_cholesky.TestCholesky) ... ok test_simple_complex (test_decomp_cholesky.TestCholesky) ... ok test_lower_complex (test_decomp_cholesky.TestCholeskyBanded) ... ok test_lower_real (test_decomp_cholesky.TestCholeskyBanded) ... ok test_upper_complex (test_decomp_cholesky.TestCholeskyBanded) ... ok test_upper_real (test_decomp_cholesky.TestCholeskyBanded) ... ok test_decomp_cholesky.TestOverwrite.test_cho_factor ... ok test_decomp_cholesky.TestOverwrite.test_cho_solve ... ok test_decomp_cholesky.TestOverwrite.test_cho_solve_banded ... ok test_decomp_cholesky.TestOverwrite.test_cholesky ... ok test_decomp_cholesky.TestOverwrite.test_cholesky_banded ... ok test_default_a (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCaxpy) ... 
ok test_x_and_y_stride (test_fblas.TestCaxpy) ... ok test_x_bad_size (test_fblas.TestCaxpy) ... ok test_x_stride (test_fblas.TestCaxpy) ... ok test_y_bad_size (test_fblas.TestCaxpy) ... ok test_y_stride (test_fblas.TestCaxpy) ... ok test_simple (test_fblas.TestCcopy) ... ok test_x_and_y_stride (test_fblas.TestCcopy) ... ok test_x_bad_size (test_fblas.TestCcopy) ... ok test_x_stride (test_fblas.TestCcopy) ... ok test_y_bad_size (test_fblas.TestCcopy) ... ok test_y_stride (test_fblas.TestCcopy) ... ok test_default_beta_y (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCgemv) ... ok test_simple_transpose (test_fblas.TestCgemv) ... ok test_simple_transpose_conj (test_fblas.TestCgemv) ... ok test_x_stride (test_fblas.TestCgemv) ... ok test_x_stride_assert (test_fblas.TestCgemv) ... ok test_x_stride_transpose (test_fblas.TestCgemv) ... ok test_y_stride (test_fblas.TestCgemv) ... ok test_y_stride_assert (test_fblas.TestCgemv) ... ok test_y_stride_transpose (test_fblas.TestCgemv) ... ok test_simple (test_fblas.TestCscal) ... ok test_x_bad_size (test_fblas.TestCscal) ... ok test_x_stride (test_fblas.TestCscal) ... ok test_simple (test_fblas.TestCswap) ... ok test_x_and_y_stride (test_fblas.TestCswap) ... ok test_x_bad_size (test_fblas.TestCswap) ... ok test_x_stride (test_fblas.TestCswap) ... ok test_y_bad_size (test_fblas.TestCswap) ... ok test_y_stride (test_fblas.TestCswap) ... ok test_default_a (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDaxpy) ... ok test_x_and_y_stride (test_fblas.TestDaxpy) ... ok test_x_bad_size (test_fblas.TestDaxpy) ... ok test_x_stride (test_fblas.TestDaxpy) ... ok test_y_bad_size (test_fblas.TestDaxpy) ... ok test_y_stride (test_fblas.TestDaxpy) ... ok test_simple (test_fblas.TestDcopy) ... ok test_x_and_y_stride (test_fblas.TestDcopy) ... ok test_x_bad_size (test_fblas.TestDcopy) ... ok test_x_stride (test_fblas.TestDcopy) ... ok test_y_bad_size (test_fblas.TestDcopy) ... ok test_y_stride (test_fblas.TestDcopy) ... 
ok test_default_beta_y (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDgemv) ... ok test_simple_transpose (test_fblas.TestDgemv) ... ok test_simple_transpose_conj (test_fblas.TestDgemv) ... ok test_x_stride (test_fblas.TestDgemv) ... ok test_x_stride_assert (test_fblas.TestDgemv) ... ok test_x_stride_transpose (test_fblas.TestDgemv) ... ok test_y_stride (test_fblas.TestDgemv) ... ok test_y_stride_assert (test_fblas.TestDgemv) ... ok test_y_stride_transpose (test_fblas.TestDgemv) ... ok test_simple (test_fblas.TestDscal) ... ok test_x_bad_size (test_fblas.TestDscal) ... ok test_x_stride (test_fblas.TestDscal) ... ok test_simple (test_fblas.TestDswap) ... ok test_x_and_y_stride (test_fblas.TestDswap) ... ok test_x_bad_size (test_fblas.TestDswap) ... ok test_x_stride (test_fblas.TestDswap) ... ok test_y_bad_size (test_fblas.TestDswap) ... ok test_y_stride (test_fblas.TestDswap) ... ok test_default_a (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestSaxpy) ... ok test_x_and_y_stride (test_fblas.TestSaxpy) ... ok test_x_bad_size (test_fblas.TestSaxpy) ... ok test_x_stride (test_fblas.TestSaxpy) ... ok test_y_bad_size (test_fblas.TestSaxpy) ... ok test_y_stride (test_fblas.TestSaxpy) ... ok test_simple (test_fblas.TestScopy) ... ok test_x_and_y_stride (test_fblas.TestScopy) ... ok test_x_bad_size (test_fblas.TestScopy) ... ok test_x_stride (test_fblas.TestScopy) ... ok test_y_bad_size (test_fblas.TestScopy) ... ok test_y_stride (test_fblas.TestScopy) ... ok test_default_beta_y (test_fblas.TestSgemv) ... ok test_simple (test_fblas.TestSgemv) ... ok test_simple_transpose (test_fblas.TestSgemv) ... ok test_simple_transpose_conj (test_fblas.TestSgemv) ... ok test_x_stride (test_fblas.TestSgemv) ... ok test_x_stride_assert (test_fblas.TestSgemv) ... ok test_x_stride_transpose (test_fblas.TestSgemv) ... ok test_y_stride (test_fblas.TestSgemv) ... ok test_y_stride_assert (test_fblas.TestSgemv) ... ok test_y_stride_transpose (test_fblas.TestSgemv) ... 
ok test_simple (test_fblas.TestSscal) ... ok test_x_bad_size (test_fblas.TestSscal) ... ok test_x_stride (test_fblas.TestSscal) ... ok test_simple (test_fblas.TestSswap) ... ok test_x_and_y_stride (test_fblas.TestSswap) ... ok test_x_bad_size (test_fblas.TestSswap) ... ok test_x_stride (test_fblas.TestSswap) ... ok test_y_bad_size (test_fblas.TestSswap) ... ok test_y_stride (test_fblas.TestSswap) ... ok test_default_a (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZaxpy) ... ok test_x_and_y_stride (test_fblas.TestZaxpy) ... ok test_x_bad_size (test_fblas.TestZaxpy) ... ok test_x_stride (test_fblas.TestZaxpy) ... ok test_y_bad_size (test_fblas.TestZaxpy) ... ok test_y_stride (test_fblas.TestZaxpy) ... ok test_simple (test_fblas.TestZcopy) ... ok test_x_and_y_stride (test_fblas.TestZcopy) ... ok test_x_bad_size (test_fblas.TestZcopy) ... ok test_x_stride (test_fblas.TestZcopy) ... ok test_y_bad_size (test_fblas.TestZcopy) ... ok test_y_stride (test_fblas.TestZcopy) ... ok test_default_beta_y (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZgemv) ... ok test_simple_transpose (test_fblas.TestZgemv) ... ok test_simple_transpose_conj (test_fblas.TestZgemv) ... ok test_x_stride (test_fblas.TestZgemv) ... ok test_x_stride_assert (test_fblas.TestZgemv) ... ok test_x_stride_transpose (test_fblas.TestZgemv) ... ok test_y_stride (test_fblas.TestZgemv) ... ok test_y_stride_assert (test_fblas.TestZgemv) ... ok test_y_stride_transpose (test_fblas.TestZgemv) ... ok test_simple (test_fblas.TestZscal) ... ok test_x_bad_size (test_fblas.TestZscal) ... ok test_x_stride (test_fblas.TestZscal) ... ok test_simple (test_fblas.TestZswap) ... ok test_x_and_y_stride (test_fblas.TestZswap) ... ok test_x_bad_size (test_fblas.TestZswap) ... ok test_x_stride (test_fblas.TestZswap) ... ok test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_gebal (test_lapack.TestFlapackSimple) ... ok test_gehrd (test_lapack.TestFlapackSimple) ... 
ok test_clapack (test_lapack.TestLapack) ... ok test_flapack (test_lapack.TestLapack) ... ok test_consistency (test_matfuncs.TestExpM) ... ok test_zero (test_matfuncs.TestExpM) ... ok test_nils (test_matfuncs.TestLogM) ... ok test_defective1 (test_matfuncs.TestSignM) ... ok test_defective2 (test_matfuncs.TestSignM) ... ok test_defective3 (test_matfuncs.TestSignM) ... ok test_nils (test_matfuncs.TestSignM) ... ok test_bad (test_matfuncs.TestSqrtM) ... ok test_special_matrices.TestBlockDiag.test_bad_arg ... ok test_special_matrices.TestBlockDiag.test_basic ... ok test_special_matrices.TestBlockDiag.test_dtype ... ok test_special_matrices.TestBlockDiag.test_no_args ... ok test_special_matrices.TestBlockDiag.test_scalar_and_1d_args ... ok test_basic (test_special_matrices.TestCirculant) ... ok test_bad_shapes (test_special_matrices.TestCompanion) ... ok test_basic (test_special_matrices.TestCompanion) ... ok test_basic (test_special_matrices.TestHadamard) ... ok test_basic (test_special_matrices.TestHankel) ... ok test_basic (test_special_matrices.TestHilbert) ... ok test_basic (test_special_matrices.TestInvHilbert) ... ok test_inverse (test_special_matrices.TestInvHilbert) ... ok test_special_matrices.TestKron.test_basic ... ok test_bad_shapes (test_special_matrices.TestLeslie) ... ok test_basic (test_special_matrices.TestLeslie) ... ok test_basic (test_special_matrices.TestToeplitz) ... ok test_complex_01 (test_special_matrices.TestToeplitz) ... ok Scalar arguments still produce a 2D array. ... ok test_scalar_01 (test_special_matrices.TestToeplitz) ... ok test_scalar_02 (test_special_matrices.TestToeplitz) ... ok test_scalar_03 (test_special_matrices.TestToeplitz) ... ok test_scalar_04 (test_special_matrices.TestToeplitz) ... ok test_2d (test_special_matrices.TestTri) ... ok test_basic (test_special_matrices.TestTri) ... ok test_diag (test_special_matrices.TestTri) ... ok test_diag2d (test_special_matrices.TestTri) ... 
ok
test_basic (test_special_matrices.TestTril) ... ok
test_diag (test_special_matrices.TestTril) ... ok
test_basic (test_special_matrices.TestTriu) ... ok
test_diag (test_special_matrices.TestTriu) ... ok
test_logsumexp (test_maxentropy.TestMaxentropy) ... ok
test_common.test_pade_trivial ... ok
test_common.test_pade_4term_exp ... ok
Test whether logsumexp() function correctly handles large inputs. ... ok
test_doccer.test_unindent('Another test\n with some indent', 'Another test\n with some indent') ... ok
test_doccer.test_unindent('Another test, one line', 'Another test, one line') ... ok
test_doccer.test_unindent('Another test\n with some indent', 'Another test\n with some indent') ... ok
test_doccer.test_unindent_dict('Another test\n with some indent', 'Another test\n with some indent') ... ok
test_doccer.test_unindent_dict('Another test, one line', 'Another test, one line') ... ok
test_doccer.test_unindent_dict('Another test\n with some indent', 'Another test\n with some indent') ... ok
test_doccer.test_docformat('Docstring\n Another test\n with some indent\n Another test, one line\n Another test\n with some indent\n', 'Docstring\n Another test\n with some indent\n Another test, one line\n Another test\n with some indent\n') ... ok
test_doccer.test_docformat('Single line doc Another test\n with some indent', 'Single line doc Another test\n with some indent') ... ok
test_doccer.test_decorator(' Docstring\n Another test\n with some indent\n ', ' Docstring\n Another test\n with some indent\n ') ... ok
test_doccer.test_decorator(' Docstring\n Another test\n with some indent\n ', ' Docstring\n Another test\n with some indent\n ') ... ok
test_bytescale (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_bytescale Need to import PIL for this test
test_imresize (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_imresize Need to import PIL for this test
test_imresize2 (test_pilutil.TestPILUtil) ...
SKIP: Skipping test: test_imresize2 Need to import PIL for this test
test_imresize3 (test_pilutil.TestPILUtil) ... SKIP: Skipping test: test_imresize3 Need to import PIL for this test
Failure: SkipTest (Skipping test: test_fromimage Need to import PIL for this test) ... SKIP: Skipping test: test_fromimage Need to import PIL for this test
test_datatypes.test_map_coordinates_dts ... ok
test_datatypes.test_uint64_max ... ok
test_filters.test_ticket_701 ... ok
test_filters.test_orders_gauss(0, array([ 0.])) ... ok
test_filters.test_orders_gauss(0, array([ 0.])) ... ok
test_filters.test_orders_gauss(, , array([ 0.]), 1, -1) ... ok
test_filters.test_orders_gauss(, , array([ 0.]), 1, 4) ... ok
test_filters.test_orders_gauss(0, array([ 0.])) ... ok
test_filters.test_orders_gauss(0, array([ 0.])) ... ok
test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, -1) ... ok
test_filters.test_orders_gauss(, , array([ 0.]), 1, -1, 4) ... ok
Regression test for #1311. ... ok
test_io.test_imread ... SKIP: Skipping test: test_imread The Python Image Library could not be found.
test_basic (test_measurements.Test_measurements_select) ... ok
test_a (test_measurements.Test_measurements_stats) ... ok
test_a_centered (test_measurements.Test_measurements_stats) ... ok
test_b (test_measurements.Test_measurements_stats) ... ok
test_b_centered (test_measurements.Test_measurements_stats) ... ok
test_nonint_labels (test_measurements.Test_measurements_stats) ... ok
label 1 ... ok
label 2 ... ok
label 3 ... ok
label 4 ... ok
label 5 ... ok
label 6 ... ok
label 7 ... ok
label 8 ... ok
label 9 ... ok
label 10 ... ok
label 11 ... ok
label 12 ... ok
label 13 ... ok
find_objects 1 ... ok
find_objects 2 ... ok
find_objects 3 ... ok
find_objects 4 ... ok
find_objects 5 ... ok
find_objects 6 ... ok
find_objects 7 ... ok
find_objects 8 ... ok
find_objects 9 ... ok
sum 1 ... ok
sum 2 ... ok
sum 3 ... ok
sum 4 ... ok
sum 5 ... ok
sum 6 ... ok
sum 7 ... ok
sum 8 ... ok
sum 9 ... ok
sum 10 ... ok
sum 11 ...
ok sum 12 ... ok mean 1 ... ok mean 2 ... ok mean 3 ... ok mean 4 ... ok minimum 1 ... ok minimum 2 ... ok minimum 3 ... ok minimum 4 ... ok maximum 1 ... ok maximum 2 ... ok maximum 3 ... ok maximum 4 ... ok Ticket #501 ... ok median 1 ... ok median 2 ... ok median 3 ... ok variance 1 ... ok variance 2 ... ok variance 3 ... ok variance 4 ... ok variance 5 ... ok variance 6 ... ok standard deviation 1 ... ok standard deviation 2 ... ok standard deviation 3 ... ok standard deviation 4 ... ok standard deviation 5 ... ok standard deviation 6 ... ok standard deviation 7 ... ok minimum position 1 ... ok minimum position 2 ... ok minimum position 3 ... ok minimum position 4 ... ok minimum position 5 ... ok minimum position 6 ... ok minimum position 7 ... ok maximum position 1 ... ok maximum position 2 ... ok maximum position 3 ... ok maximum position 4 ... ok maximum position 5 ... ok maximum position 6 ... ok maximum position 7 - float labels ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok histogram 1 ... ok histogram 2 ... ok histogram 3 ... ok affine_transform 1 ... ok affine transform 2 ... ok affine transform 3 ... ok affine transform 4 ... ok affine transform 5 ... ok affine transform 6 ... ok affine transform 7 ... ok affine transform 8 ... ok affine transform 9 ... ok affine transform 10 ... ok affine transform 11 ... ok affine transform 12 ... ok affine transform 13 ... ok affine transform 14 ... ok affine transform 15 ... ok affine transform 16 ... ok affine transform 17 ... ok affine transform 18 ... ok affine transform 19 ... ok affine transform 20 ... ok affine transform 21 ... ok binary closing 1 ... ok binary closing 2 ... ok binary dilation 1 ... ok binary dilation 2 ... ok binary dilation 3 ... 
ok binary dilation 4 ... ok binary dilation 5 ... ok binary dilation 6 ... ok binary dilation 7 ... ok binary dilation 8 ... ok binary dilation 9 ... ok binary dilation 10 ... ok binary dilation 11 ... ok binary dilation 12 ... ok binary dilation 13 ... ok binary dilation 14 ... ok binary dilation 15 ... ok binary dilation 16 ... ok binary dilation 17 ... ok binary dilation 18 ... ok binary dilation 19 ... ok binary dilation 20 ... ok binary dilation 21 ... ok binary dilation 22 ... ok binary dilation 23 ... ok binary dilation 24 ... ok binary dilation 25 ... ok binary dilation 26 ... ok binary dilation 27 ... ok binary dilation 28 ... ok binary dilation 29 ... ok binary dilation 30 ... ok binary dilation 31 ... ok binary dilation 32 ... ok binary dilation 33 ... ok binary dilation 34 ... ok binary dilation 35 ... ok binary erosion 1 ... ok binary erosion 2 ... ok binary erosion 3 ... ok binary erosion 4 ... ok binary erosion 5 ... ok binary erosion 6 ... ok binary erosion 7 ... ok binary erosion 8 ... ok binary erosion 9 ... ok binary erosion 10 ... ok binary erosion 11 ... ok binary erosion 12 ... ok binary erosion 13 ... ok binary erosion 14 ... ok binary erosion 15 ... ok binary erosion 16 ... ok binary erosion 17 ... ok binary erosion 18 ... ok binary erosion 19 ... ok binary erosion 20 ... ok binary erosion 21 ... ok binary erosion 22 ... ok binary erosion 23 ... ok binary erosion 24 ... ok binary erosion 25 ... ok binary erosion 26 ... ok binary erosion 27 ... ok binary erosion 28 ... ok binary erosion 29 ... ok binary erosion 30 ... ok binary erosion 31 ... ok binary erosion 32 ... ok binary erosion 33 ... ok binary erosion 34 ... ok binary erosion 35 ... ok binary erosion 36 ... ok binary fill holes 1 ... ok binary fill holes 2 ... ok binary fill holes 3 ... ok binary opening 1 ... ok binary opening 2 ... ok binary propagation 1 ... ok binary propagation 2 ... ok black tophat 1 ... ok black tophat 2 ... ok boundary modes ... ok boundary modes 2 ... 
ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 - single precision data ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... 
ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 ... ok generic 1d filter 1 ... ok generic gradient magnitude 1 ... ok generic laplace filter 1 ... ok geometric transform 1 ... ok geometric transform 2 ... ok geometric transform 3 ... ok geometric transform 4 ... ok geometric transform 5 ... ok geometric transform 6 ... ok geometric transform 7 ... ok geometric transform 8 ... ok geometric transform 10 ... ok geometric transform 13 ... ok geometric transform 14 ... ok geometric transform 15 ... ok geometric transform 16 ... ok geometric transform 17 ... ok geometric transform 18 ... ok geometric transform 19 ... ok geometric transform 20 ... ok geometric transform 21 ... ok geometric transform 22 ... ok geometric transform 23 ... ok geometric transform 24 ... ok grey closing 1 ... ok grey closing 2 ... ok grey dilation 1 ... ok grey dilation 2 ... ok grey dilation 3 ... ok grey erosion 1 ... ok grey erosion 2 ... ok grey erosion 3 ... ok grey opening 1 ... ok grey opening 2 ... ok binary hit-or-miss transform 1 ... ok binary hit-or-miss transform 2 ... ok binary hit-or-miss transform 3 ... ok iterating a structure 1 ... ok iterating a structure 2 ... ok iterating a structure 3 ... ok laplace filter 1 ... ok laplace filter 2 ... ok map coordinates 1 ... ok map coordinates 2 ... ok maximum filter 1 ... ok maximum filter 2 ... ok maximum filter 3 ... ok maximum filter 4 ... ok maximum filter 5 ... ok maximum filter 6 ... ok maximum filter 7 ... ok maximum filter 8 ... ok maximum filter 9 ... ok minimum filter 1 ... ok minimum filter 2 ... ok minimum filter 3 ... ok minimum filter 4 ... ok minimum filter 5 ... ok minimum filter 6 ... ok minimum filter 7 ... ok minimum filter 8 ... ok minimum filter 9 ... ok morphological gradient 1 ... ok morphological gradient 2 ... 
ok morphological laplace 1 ... ok morphological laplace 2 ... ok prewitt filter 1 ... ok prewitt filter 2 ... ok prewitt filter 3 ... ok prewitt filter 4 ... ok rank filter 1 ... ok rank filter 2 ... ok rank filter 3 ... ok rank filter 4 ... ok rank filter 5 ... ok rank filter 6 ... ok rank filter 7 ... ok median filter 8 ... ok rank filter 9 ... ok rank filter 10 ... ok rank filter 11 ... ok rank filter 12 ... ok rank filter 13 ... ok rank filter 14 ... ok rotate 1 ... ok rotate 2 ... ok rotate 3 ... ok rotate 4 ... ok rotate 5 ... ok rotate 6 ... ok rotate 7 ... ok rotate 8 ... ok shift 1 ... ok shift 2 ... ok shift 3 ... ok shift 4 ... ok shift 5 ... ok shift 6 ... ok shift 7 ... ok shift 8 ... ok shift 9 ... ok sobel filter 1 ... ok sobel filter 2 ... ok sobel filter 3 ... ok sobel filter 4 ... ok spline filter 1 ... ok spline filter 2 ... ok spline filter 3 ... ok spline filter 4 ... ok spline filter 5 ... ok uniform filter 1 ... ok uniform filter 2 ... ok uniform filter 3 ... ok uniform filter 4 ... ok uniform filter 5 ... ok uniform filter 6 ... ok watershed_ift 1 ... ok watershed_ift 2 ... ok watershed_ift 3 ... ok watershed_ift 4 ... ok watershed_ift 5 ... ok watershed_ift 6 ... ok watershed_ift 7 ... ok white tophat 1 ... ok white tophat 2 ... ok zoom 1 ... ok zoom 2 ... ok zoom by affine transformation 1 ... ok Ticket #1419 ... ok Regression test for #413: median_filter does not handle bytes orders. ... ok Ticket #643 ... ok test_regression.test_ticket_742 ... ok test_explicit (test_odr.TestODR) ... ok test_implicit (test_odr.TestODR) ... ok test_lorentz (test_odr.TestODR) ... ok test_multi (test_odr.TestODR) ... ok test_pearson (test_odr.TestODR) ... ok test_ticket_1253 (test_odr.TestODR) ... ok test_simple (test_cobyla.TestCobyla) ... ok test_linesearch.TestLineSearch.test_armijo_terminate_1 ... ok test_linesearch.TestLineSearch.test_line_search_armijo ... ok test_linesearch.TestLineSearch.test_line_search_wolfe1 ... 
ok test_linesearch.TestLineSearch.test_line_search_wolfe2 ... ok test_linesearch.TestLineSearch.test_scalar_search_armijo ... ok test_linesearch.TestLineSearch.test_scalar_search_wolfe1 ... ok test_linesearch.TestLineSearch.test_scalar_search_wolfe2 ... ok test_linesearch.TestLineSearch.test_wolfe_terminate ... ok test_one_argument (test_minpack.TestCurveFit) ... ok test_two_argument (test_minpack.TestCurveFit) ... ok fsolve without gradient, equal pipes -> equal flows ... ok fsolve with gradient, equal pipes -> equal flows ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... ok test_minpack.TestFSolve.test_wrong_shape_fprime_function ... ok The callable 'func' has no 'func_name' attribute. ... ok test_minpack.TestFSolve.test_wrong_shape_func_function ... ok f(x) = c * x**2; fixed point should be x=1/c ... ok f(x) = c * x**0.5; fixed point should be x=c**2 ... ok test_array_trivial (test_minpack.TestFixedPoint) ... ok f(x) = x**2; x0=1.05; fixed point should be x=1 ... ok f(x) = x**0.5; x0=1.05; fixed point should be x=1 ... ok f(x) = 2x; fixed point should be x=0 ... ok test_basic (test_minpack.TestLeastSq) ... ok test_full_output (test_minpack.TestLeastSq) ... ok test_input_untouched (test_minpack.TestLeastSq) ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... ok test_wrong_shape_Dfun_function (test_minpack.TestLeastSq) ... ok The callable 'func' has no 'func_name' attribute. ... ok test_wrong_shape_func_function (test_minpack.TestLeastSq) ... ok test_nnls (test_nnls.TestNNLS) ... ok fsolve without gradient, equal pipes -> equal flows ... ok fsolve with gradient, equal pipes -> equal flows ... ok The callables 'func' and 'deriv_func' have no 'func_name' attribute. ... ok test_nonlin.TestFSolve.test_wrong_shape_fprime_function ... ok The callable 'func' has no 'func_name' attribute. ... ok test_nonlin.TestFSolve.test_wrong_shape_func_function ... ok test_nonlin.TestJacobianDotSolve.test_anderson ... 
ok test_nonlin.TestJacobianDotSolve.test_broyden1 ... ok test_nonlin.TestJacobianDotSolve.test_broyden2 ... ok test_nonlin.TestJacobianDotSolve.test_diagbroyden ... ok test_nonlin.TestJacobianDotSolve.test_excitingmixing ... ok test_nonlin.TestJacobianDotSolve.test_krylov ... ok test_nonlin.TestJacobianDotSolve.test_linearmixing ... ok test_anderson (test_nonlin.TestLinear) ... ok test_broyden1 (test_nonlin.TestLinear) ... ok test_broyden2 (test_nonlin.TestLinear) ... ok test_krylov (test_nonlin.TestLinear) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... ok test_nonlin.TestNonlin.test_problem(, ) ... 
ok test_anderson (test_nonlin.TestNonlinOldTests) ... ok test_broyden1 (test_nonlin.TestNonlinOldTests) ... ok test_broyden2 (test_nonlin.TestNonlinOldTests) ... ok test_diagbroyden (test_nonlin.TestNonlinOldTests) ... ok test_exciting (test_nonlin.TestNonlinOldTests) ... ok test_linearmixing (test_nonlin.TestNonlinOldTests) ... ok test_anderson (test_nonlin.TestSecant) ... ok test_broyden1 (test_nonlin.TestSecant) ... ok test_broyden1_update (test_nonlin.TestSecant) ... ok test_broyden2 (test_nonlin.TestSecant) ... ok test_broyden2_update (test_nonlin.TestSecant) ... ok Broyden-Fletcher-Goldfarb-Shanno optimization routine ... ok Test corner case where -Inf is the minimum. See #1494. ... ok brent algorithm ... ok conjugate gradient optimization routine ... ok Test fminbound ... ok test_fminbound_scalar (test_optimize.TestOptimize) ... ok limited-memory bound-constrained BFGS algorithm ... ok line-search Newton conjugate gradient optimization routine ... ok Nelder-Mead simplex algorithm ... ok Powell (direction set) optimization routine ... ok Compare rosen_hess(x) times p with rosen_hess_prod(x,p) (ticket #1248) ... ok test_tnc (test_optimize.TestTnc) ... ok Ticket #1214 ... ok Ticket #1074 ... ok test_bound_approximated (test_slsqp.TestSLSQP) ... ok test_bound_equality_given (test_slsqp.TestSLSQP) ... ok test_bound_equality_inequality_given (test_slsqp.TestSLSQP) ... ok test_unbounded_approximated (test_slsqp.TestSLSQP) ... ok test_unbounded_given (test_slsqp.TestSLSQP) ... ok test_bisect (test_zeros.TestBasic) ... ok test_brenth (test_zeros.TestBasic) ... ok test_brentq (test_zeros.TestBasic) ... ok test_deriv_zero_warning (test_zeros.TestBasic) ... ok test_ridder (test_zeros.TestBasic) ... ok test_axis_reverse (test_array_tools.TestArrayTools) ... ok test_axis_slice (test_array_tools.TestArrayTools) ... ok test_const_ext (test_array_tools.TestArrayTools) ... ok test_even_ext (test_array_tools.TestArrayTools) ... 
ok test_odd_ext (test_array_tools.TestArrayTools) ... ok test_backward_diff (test_cont2discrete.TestC2D) ... ok test_bilinear (test_cont2discrete.TestC2D) ... ok Test that the solution to the discrete approximation of a continuous ... ok test_euler (test_cont2discrete.TestC2D) ... ok test_gbt (test_cont2discrete.TestC2D) ... ok Test method='gbt' with alpha=0.25 for tf and zpk cases. ... ok test_transferfunction (test_cont2discrete.TestC2D) ... ok test_zerospolesgain (test_cont2discrete.TestC2D) ... ok test_zoh (test_cont2discrete.TestC2D) ... ok test_dimpulse (test_dltisys.TestDLTI) ... ok test_dlsim (test_dltisys.TestDLTI) ... ok test_dlsim_simple1d (test_dltisys.TestDLTI) ... ok test_dlsim_simple2d (test_dltisys.TestDLTI) ... ok test_dlsim_trivial (test_dltisys.TestDLTI) ... ok test_dstep (test_dltisys.TestDLTI) ... ok test_more_step_and_impulse (test_dltisys.TestDLTI) ... ok test_basic (test_filter_design.TestFreqz) ... ok test_basic_whole (test_filter_design.TestFreqz) ... ok test_plot (test_filter_design.TestFreqz) ... ok Regression test for ticket 1441. ... ok Regression test for #651: better handling of badly conditioned ... ok test_simple (test_filter_design.TestTf2zpk) ... ok Test the identity transfer function. ... ok Test that invalid cutoff argument raises ValueError. ... ok test_bandpass (test_fir_filter_design.TestFirWinMore) ... ok Test that attempt to create a highpass filter with an even number ... ok test_highpass (test_fir_filter_design.TestFirWinMore) ... ok test_lowpass (test_fir_filter_design.TestFirWinMore) ... ok test_multi (test_fir_filter_design.TestFirWinMore) ... ok Test the nyq keyword. ... ok test_response (test_fir_filter_design.TestFirwin) ... ok For one lowpass, bandpass, and highpass example filter, this test ... ok test01 (test_fir_filter_design.TestFirwin2) ... ok test02 (test_fir_filter_design.TestFirwin2) ... ok test03 (test_fir_filter_design.TestFirwin2) ... ok Test firwin2 when window=None. ... 
ok Test firwin2 for calculating Type IV filters ... ok Test firwin2 for calculating Type III filters ... ok test_invalid_args (test_fir_filter_design.TestFirwin2) ... ok test_nyq (test_fir_filter_design.TestFirwin2) ... ok test_bad_args (test_fir_filter_design.TestRemez) ... ok test_hilbert (test_fir_filter_design.TestRemez) ... ok test_fir_filter_design.test_kaiser_beta ... ok test_fir_filter_design.test_kaiser_atten ... ok test_fir_filter_design.test_kaiserord ... ok test_ltisys.TestSS2TF.test_basic(3, 3, 3) ... ok test_ltisys.TestSS2TF.test_basic(1, 3, 3) ... ok test_ltisys.TestSS2TF.test_basic(1, 1, 1) ... ok test_ltisys.Test_impulse2.test_01 ... ok Specify the desired time values for the output. ... ok Specify an initial condition as a scalar. ... ok Specify an initial condition as a list. ... ok test_ltisys.Test_impulse2.test_05 ... ok test_ltisys.Test_impulse2.test_06 ... ok test_ltisys.Test_lsim2.test_01 ... ok test_ltisys.Test_lsim2.test_02 ... ok test_ltisys.Test_lsim2.test_03 ... ok test_ltisys.Test_lsim2.test_04 ... ok test_ltisys.Test_lsim2.test_05 ... ok Test use of the default values of the arguments `T` and `U`. ... ok test_ltisys.Test_step2.test_01 ... ok Specify the desired time values for the output. ... ok Specify an initial condition as a scalar. ... ok Specify an initial condition as a list. ... ok test_ltisys.Test_step2.test_05 ... ok test_ltisys.Test_step2.test_06 ... ok test_basic (test_signaltools.TestCSpline1DEval) ... ok test_2d_arrays (test_signaltools.TestConvolve) ... ok test_basic (test_signaltools.TestConvolve) ... ok test_complex (test_signaltools.TestConvolve) ... ok test_same_mode (test_signaltools.TestConvolve) ... ok test_valid_mode (test_signaltools.TestConvolve) ... ok test_zero_order (test_signaltools.TestConvolve) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex128) ... 
ok test_rank3 (test_signaltools.TestCorrelateComplex128) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex256) ... ok test_rank3 (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex256) ... ok test_rank3 (test_signaltools.TestCorrelateComplex256) ... ok test_rank1_full (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_same (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_valid (test_signaltools.TestCorrelateComplex64) ... ok test_rank3 (test_signaltools.TestCorrelateComplex64) ... ok test_rank1_full (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_same (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_valid (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_all (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_same (test_signaltools.TestCorrelateDecimal) ... ok test_rank3_valid (test_signaltools.TestCorrelateDecimal) ... ok test_rank1_full (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat128) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat128) ... ok test_rank1_full (test_signaltools.TestCorrelateFloat32) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat32) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat32) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat32) ... 
ok test_rank1_full (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_same (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_valid (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_all (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_same (test_signaltools.TestCorrelateFloat64) ... ok test_rank3_valid (test_signaltools.TestCorrelateFloat64) ... ok test_rank1_full (test_signaltools.TestCorrelateInt) ... ok test_rank1_same (test_signaltools.TestCorrelateInt) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt) ... ok test_rank3_all (test_signaltools.TestCorrelateInt) ... ok test_rank3_same (test_signaltools.TestCorrelateInt) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt) ... ok test_rank1_full (test_signaltools.TestCorrelateInt16) ... ok test_rank1_same (test_signaltools.TestCorrelateInt16) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt16) ... ok test_rank3_all (test_signaltools.TestCorrelateInt16) ... ok test_rank3_same (test_signaltools.TestCorrelateInt16) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt16) ... ok test_rank1_full (test_signaltools.TestCorrelateInt8) ... ok test_rank1_same (test_signaltools.TestCorrelateInt8) ... ok test_rank1_valid (test_signaltools.TestCorrelateInt8) ... ok test_rank3_all (test_signaltools.TestCorrelateInt8) ... ok test_rank3_same (test_signaltools.TestCorrelateInt8) ... ok test_rank3_valid (test_signaltools.TestCorrelateInt8) ... ok test_rank1_full (test_signaltools.TestCorrelateUint16) ... ok test_rank1_same (test_signaltools.TestCorrelateUint16) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint16) ... ok test_rank3_all (test_signaltools.TestCorrelateUint16) ... ok test_rank3_same (test_signaltools.TestCorrelateUint16) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint16) ... ok test_rank1_full (test_signaltools.TestCorrelateUint64) ... ok test_rank1_same (test_signaltools.TestCorrelateUint64) ... 
ok test_rank1_valid (test_signaltools.TestCorrelateUint64) ... ok test_rank3_all (test_signaltools.TestCorrelateUint64) ... ok test_rank3_same (test_signaltools.TestCorrelateUint64) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint64) ... ok test_rank1_full (test_signaltools.TestCorrelateUint8) ... ok test_rank1_same (test_signaltools.TestCorrelateUint8) ... ok test_rank1_valid (test_signaltools.TestCorrelateUint8) ... ok test_rank3_all (test_signaltools.TestCorrelateUint8) ... ok test_rank3_same (test_signaltools.TestCorrelateUint8) ... ok test_rank3_valid (test_signaltools.TestCorrelateUint8) ... ok test_signaltools.TestDecimate.test_basic ... ok Regression test for ticket #1480. ... ok test_2d_complex_same (test_signaltools.TestFFTConvolve) ... ok test_2d_real_same (test_signaltools.TestFFTConvolve) ... ok test_complex (test_signaltools.TestFFTConvolve) ... ok test_random_data (test_signaltools.TestFFTConvolve) ... ok test_real (test_signaltools.TestFFTConvolve) ... ok test_real_same_mode (test_signaltools.TestFFTConvolve) ... ok test_real_valid_mode (test_signaltools.TestFFTConvolve) ... ok test_zero_order (test_signaltools.TestFFTConvolve) ... ok test_basic (test_signaltools.TestFiltFilt) ... ok test_sine (test_signaltools.TestFiltFilt) ... ok test_signaltools.TestHilbert.test_bad_args ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([[ 0.+2.30940108j, 6.+2.30940108j, 12.+2.30940108j], ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([ 0.+2.30940108j, 1.-1.15470054j, 2.-1.15470054j, 3.-1.15470054j, ... ok test_signaltools.TestHilbert.test_hilbert_axisN((3, 20), [3, 20]) ... ok test_signaltools.TestHilbert.test_hilbert_axisN((20, 3), [20, 3]) ... ok test_signaltools.TestHilbert.test_hilbert_axisN(array([ 0.00000000e+00-1.7201583j , 1.00000000e+00-2.04779451j, ... ok test_signaltools.TestHilbert.test_hilbert_theoretical ... ok test_signaltools.TestHilbert2.test_bad_args ... ok test_basic (test_signaltools.TestLFilterZI) ... 
ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex128) ... ok test_rank3 (test_signaltools.TestLinearFilterComplex128) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank2 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplex64) ... ok test_rank3 (test_signaltools.TestLinearFilterComplex64) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok test_rank3 (test_signaltools.TestLinearFilterComplexxxiExtended28) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterDecimal) ... ok test_rank3 (test_signaltools.TestLinearFilterDecimal) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat32) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat32) ... 
ok test_rank3 (test_signaltools.TestLinearFilterFloat32) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloat64) ... ok test_rank3 (test_signaltools.TestLinearFilterFloat64) ... ok Regression test for #880: empty array for zi crashes. ... ok test_rank1 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2_init_cond_a0 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank2_init_cond_a1 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_rank3 (test_signaltools.TestLinearFilterFloatExtended) ... ok test_basic (test_signaltools.TestMedFilt) ... ok Ticket #1124. Ensure this does not segfault. ... ok test_basic (test_signaltools.TestOrderFilt) ... ok test_basic (test_signaltools.TestWiener) ... ok Test if height of peak in normalized Lomb-Scargle periodogram ... ok Test if frequency location of peak corresponds to frequency of ... ok test_spectral.TestLombscargle.test_wrong_shape ... ok test_spectral.TestLombscargle.test_zero_division ... ok test_hyperbolic_at_zero (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_01 (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_02 (test_waveforms.TestChirp) ... ok test_hyperbolic_freq_03 (test_waveforms.TestChirp) ... ok test_integer_all (test_waveforms.TestChirp) ... ok test_integer_f0 (test_waveforms.TestChirp) ... ok test_integer_f1 (test_waveforms.TestChirp) ... ok test_integer_t1 (test_waveforms.TestChirp) ... ok test_linear_at_zero (test_waveforms.TestChirp) ... ok test_linear_freq_01 (test_waveforms.TestChirp) ... ok test_linear_freq_02 (test_waveforms.TestChirp) ... ok test_logarithmic_at_zero (test_waveforms.TestChirp) ... 
ok test_logarithmic_freq_01 (test_waveforms.TestChirp) ... ok test_logarithmic_freq_02 (test_waveforms.TestChirp) ... ok test_logarithmic_freq_03 (test_waveforms.TestChirp) ... ok test_quadratic_at_zero (test_waveforms.TestChirp) ... ok test_quadratic_at_zero2 (test_waveforms.TestChirp) ... ok test_quadratic_freq_01 (test_waveforms.TestChirp) ... ok test_quadratic_freq_02 (test_waveforms.TestChirp) ... ok test_unknown_method (test_waveforms.TestChirp) ... ok test_integer_bw (test_waveforms.TestGaussPulse) ... ok test_integer_bwr (test_waveforms.TestGaussPulse) ... ok test_integer_fc (test_waveforms.TestGaussPulse) ... ok test_integer_tpr (test_waveforms.TestGaussPulse) ... ok test_sweep_poly_const (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_cubic (test_waveforms.TestSweepPoly) ... ok Use an array of coefficients instead of a poly1d. ... ok Use a list of coefficients instead of a poly1d. ... ok test_sweep_poly_linear (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_quad1 (test_waveforms.TestSweepPoly) ... ok test_sweep_poly_quad2 (test_waveforms.TestSweepPoly) ... ok test_cascade (test_wavelets.TestWavelets) ... ok test_daub (test_wavelets.TestWavelets) ... ok test_morlet (test_wavelets.TestWavelets) ... ok test_qmf (test_wavelets.TestWavelets) ... ok test_windows.TestChebWin.test_cheb_even ... ok test_windows.TestChebWin.test_cheb_odd ... ok test_windows.TestGetWindow.test_boxcar ... ok test_windows.TestGetWindow.test_cheb_even ... ok test_windows.TestGetWindow.test_cheb_odd ... ok Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled Getting factors of real matrix ... SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled Getting factors of complex matrix ... SKIP: Skipping test: test_complex_lu UMFPACK appears not to be compiled Getting factors of real matrix ... 
SKIP: Skipping test: test_real_lu UMFPACK appears not to be compiled
Prefactorize (with UMFPACK) matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_umfpack UMFPACK appears not to be compiled
Prefactorize matrix for solving with multiple rhs ... SKIP: Skipping test: test_factorized_without_umfpack UMFPACK appears not to be compiled
Solve with UMFPACK: double precision complex ... SKIP: Skipping test: test_solve_complex_umfpack UMFPACK appears not to be compiled
Solve: single precision complex ... SKIP: Skipping test: test_solve_complex_without_umfpack UMFPACK appears not to be compiled
Solve with UMFPACK: double precision, sparse rhs ... SKIP: Skipping test: test_solve_sparse_rhs UMFPACK appears not to be compiled
Solve with UMFPACK: double precision ... SKIP: Skipping test: test_solve_umfpack UMFPACK appears not to be compiled
Solve: single precision ... SKIP: Skipping test: test_solve_without_umfpack UMFPACK appears not to be compiled
test_non_square (test_linsolve.TestLinsolve) ... ok
test_singular (test_linsolve.TestLinsolve) ... ok
test_smoketest (test_linsolve.TestLinsolve) ... ok
test_twodiags (test_linsolve.TestLinsolve) ... ok
test_linsolve.TestSplu.test_lu_refcount ... ok
test_linsolve.TestSplu.test_spilu_nnz0 ... ok
test_linsolve.TestSplu.test_spilu_smoketest ... ok
test_linsolve.TestSplu.test_splu_basic ... ok
test_linsolve.TestSplu.test_splu_nnz0 ... ok
test_linsolve.TestSplu.test_splu_perm ... ok
test_linsolve.TestSplu.test_splu_smoketest ... ok
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ...
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:103: RuntimeWarning: invalid value encountered in absolute
  ind = np.argsort(abs(reval))
FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal') ...
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:86: RuntimeWarning: divide by zero encountered in divide
  reval = 1. / (eval - sigma)
FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, None, , None, 'normal') ... FAIL
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, None, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'normal') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'buckling') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, 0.5, , None, 'cayley') ... ERROR
test_arpack.test_symmetric_modes(True, , 'f', 2, 'BE', None, None, , None, 'normal') ...
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/numpy/core/fromnumeric.py:2296: RuntimeWarning: overflow encountered in multiply
  return round(decimals, out)
FAIL
test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal') ... ok
test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling') ...
ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley') ... 
ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'LA', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, None, , None, 'normal') ... 
ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'buckling') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, 0.5, , None, 'cayley') ... ok test_arpack.test_symmetric_modes(True, , 'd', 2, 'BE', None, None, , None, 'normal') ... ok test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ... 
ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, None, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... FAIL test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, None, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'normal') ... ERROR test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling') ... 
/usr/local/Cellar/python/2.7.2/lib/python2.7/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py:96: RuntimeWarning: divide by zero encountered in divide
  reval = eval / (eval - sigma)
[... further single-precision ('f') test_symmetric_modes ERROR/FAIL lines omitted ...]

From ckkart at hoc.net Mon Oct 10 10:26:52 2011
From: ckkart at hoc.net (Christian K.)
Date: Mon, 10 Oct 2011 14:26:52 +0000 (UTC)
Subject: [SciPy-User] MLE with stats.lognorm
References: 
Message-ID: 

> >> for example with starting value for loc
> >>>>> print stats.lognorm.fit(x, loc=0)
> >> (0.23800805074491538, 0.034900026034516723, 196.31113801786194)
> >
> > I see. Is there any workaround/patch to force loc=0.0? What is the
> > meaning of loc anyway?
>
> loc is the starting value for fmin, I don't remember how to specify
> starting values for shape parameters, I never used it.
>
> As in the ticket you could monkey patch the _fitstart function
>
> >>> stats.cauchy._fitstart = lambda x:(0,1)
> >>> stats.cauchy.fit(x)
>
> or what I do to experiment with starting values is
>
> stats.distributions.lognorm_gen._fitstart = fitstart_lognormal

Ok, but this is not different from calling fit like
stats.lognorm.fit(samples, loc=0.0)

I would really need to force loc=0.0

stats.lognorm.fit(samples, loc=0.0, floc=0.0)

does not work either.
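For reference, the intended semantics of `floc` — fixing the location at the given value, as opposed to `loc`, which only supplies a starting value — can be checked with a small simulation. This is a sketch of how the call is meant to behave, and it runs correctly on SciPy releases where the bug discussed in this thread has been fixed:

```python
import numpy as np
from scipy import stats

# Draw a log-normal sample with known parameters: shape s=0.25, loc=0, scale=20.
samples = stats.lognorm.rvs(0.25, loc=0.0, scale=20.0, size=2000, random_state=0)

# floc=0 removes loc from the optimization entirely; only s and scale are fitted.
s_hat, loc_hat, scale_hat = stats.lognorm.fit(samples, floc=0.0)

print(s_hat, loc_hat, scale_hat)  # loc_hat comes back as exactly 0.0
```

With loc fixed at zero, the fitted scale is consistent with the scale = exp(mu) parameterization discussed elsewhere in this thread.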
Btw., I think the extradoc is quite misleading:

"""
lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2)
for x > 0, s > 0.

If log x is normally distributed with mean mu and variance sigma**2,
then x is log-normally distributed with shape parameter sigma and scale
parameter exp(mu).
"""

sigma seems to equal s in the function definition, but mu does not appear at
all. It seems to enter via _pdf()/scale when looking at distributions.py,
where scale = exp(mu)?

Christian

From Salman.Haq at neustar.biz Mon Oct 10 11:14:31 2011
From: Salman.Haq at neustar.biz (Haq, Salman)
Date: Mon, 10 Oct 2011 11:14:31 -0400
Subject: [SciPy-User] Time Series Analysis in SciPy
Message-ID: 

Hi All,

I'm dipping my feet in statistical analysis after a long time and could use some high-level guidance.

I have several historical time series and I want to be able to answer questions about them like:
 - how well does series A track series B?
 - is there causality between series A and B?
 - etc....

I think these are common questions about time series data that I'd like to answer.

How do I go about doing this?
 - which set of libraries/functions should I most likely explore?
 - which type of visualizations will be most intuitive to understand the results?
 - implementation-wise, which is the most efficient way to store/load the data if the series are typically under 1000 samples?
 - other things or resources I should keep in mind?

Thanks,
Salman

From josef.pktd at gmail.com Mon Oct 10 11:22:14 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 10 Oct 2011 11:22:14 -0400
Subject: [SciPy-User] Time Series Analysis in SciPy
In-Reply-To: 
References: 
Message-ID: 

On Mon, Oct 10, 2011 at 11:14 AM, Haq, Salman wrote:
> Hi All,
>
> I'm dipping my feet in statistical analysis after a long time and could use some high-level guidance.
>
> I have several historical time series and I want to be able to answer questions about them like:
> - how well does series A track series B?
> ?- is there causality between series A and B? > ?- etc.... > > I think these are common questions about time series data that I'd like to answer. > > How do I go about doing this? > ?- which set of libraries/functions should I most likely explore? short answer: pandas and scikits.statsmodels. Josef > ?- which type of visualizations will be most intuitive to understand the results? > ?- implementation-wise, which is the most efficient way to store/load the data if the series are typically under 1000 samples? > ?- other things or resources I should keep in mind? > > Thanks, > Salman > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Oct 10 11:23:48 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 10 Oct 2011 11:23:48 -0400 Subject: [SciPy-User] Time Series Analysis in SciPy In-Reply-To: References: Message-ID: On Mon, Oct 10, 2011 at 11:22 AM, wrote: > On Mon, Oct 10, 2011 at 11:14 AM, Haq, Salman wrote: >> Hi All, >> >> I'm dipping my feet in statistical analysis after a long time could use some high-level guidance. >> >> I have several historical time series and I want to be able to answer questions about them like: >> ?- how well does series A track series B? >> ?- is there causality between series A and B? >> ?- etc.... >> >> I think these are common questions about time series data that I'd like to answer. >> >> How do I go about doing this? >> ?- which set of libraries/functions should I most likely explore? > > short answer: pandas and scikits.statsmodels. too short: and nitime if you are more interested in frequency domain Josef > > Josef > >> ?- which type of visualizations will be most intuitive to understand the results? >> ?- implementation-wise, which is the most efficient way to store/load the data if the series are typically under 1000 samples? 
>> ?- other things or resources I should keep in mind? >> >> Thanks, >> Salman >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From ckkart at hoc.net Mon Oct 10 13:07:57 2011 From: ckkart at hoc.net (Christian K.) Date: Mon, 10 Oct 2011 19:07:57 +0200 Subject: [SciPy-User] track is down Message-ID: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Apache/2.2.3 (CentOS) Server at projects.scipy.org Port 80 From pav at iki.fi Mon Oct 10 13:16:40 2011 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 10 Oct 2011 19:16:40 +0200 Subject: [SciPy-User] track is down In-Reply-To: References: Message-ID: Hi, Yes, it's experiencing some problems apparently with load. A workaround is to wait five minutes and try again. 10.10.2011 19:07, Christian K. kirjoitti: > Internal Server Error > > The server encountered an internal error or misconfiguration and was > unable to complete your request. > > Please contact the server administrator, root at localhost and inform them > of the time the error occurred, and anything you might have done that > may have caused the error. > > More information about this error may be available in the server error log. > > Apache/2.2.3 (CentOS) Server at projects.scipy.org Port 80 From josef.pktd at gmail.com Mon Oct 10 15:22:42 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 10 Oct 2011 15:22:42 -0400 Subject: [SciPy-User] MLE with stats.lognorm In-Reply-To: References: Message-ID: On Mon, Oct 10, 2011 at 10:26 AM, Christian K. 
wrote: >> >> for example with starting value for loc >> >>>>> print stats.lognorm.fit(x, loc=0) >> >> (0.23800805074491538, 0.034900026034516723, 196.31113801786194) >> > >> > I see. Is there any workaround/patch to force loc=0.0? What is the >> > meaning of loc anyway? >> >> loc is the starting value for fmin, I don't remember how to specify >> starting values for shape parameters, I never used it. >> >> As in the ticket you could monkey patch the _fitstart function >> >> >>> stats.cauchy._fitstart = lambda x:(0,1) >> >>> stats.cauchy.fit(x) >> >> or what I do to experiment with starting values is >> >> stats.distributions.lognorm_gen._fitstart = fitstart_lognormal > > Ok, but this is not different from calling fit like > stats.lognorm.fit(samples, loc=0.0) > > I would really need to force loc=0.0 > > stats.lognorm.fit(samples, loc=0.0, floc=0.0) > > does not work either. ok, I misunderstood that you want to fix the location parameter at zero This looks like a different bug. floc=0 doesn't seem to work, I don't get any results that look close to the true values With a sample size of 2000 the MLE should be pretty close to the true parameters: import numpy as np from scipy import stats np.set_printoptions(precision=4) print 'true' print 0.25, 0., 20.0 print 'estimated, floc=0, loc=0' for i in range(10): x = stats.lognorm.rvs(0.25, 0., 20.0, size=2000) print np.array(stats.lognorm.fit(x, floc=0)), \ np.array(stats.lognorm.fit(x, loc=0)) true 0.25 0.0 20.0 estimated, floc=0, loc=0 [ 2.1271 0. 2.3999] [ 0.2623 1.0211 18.7911] [ 2.1393 0. 2.3952] [ 0.2523 0.0294 20.0117] [ 2.1356 0. 2.3978] [ 0.2477 0.03 19.9703] [ 2.1378 0. 2.3874] [ 0.2496 0.0301 19.9231] [ 2.1463 0. 2.3641] [ 0.2474 0.0292 19.9051] [ 2.1408 0. 2.3898] [ 0.2459 0.0303 20.0118] [ 2.1252 0. 2.4326] [ 0.251 0.029 20.0412] [ 2.1296 0. 2.3943] [ 0.2476 0.0296 19.8208] [ 2.1344 0. 2.401 ] [ 0.2472 0.0299 19.9744] [ 2.1383 0. 
2.4133] [ 0.247 0.0301 20.1544] floc=0 is supposed to fix the location at 0, loc=0 only provides a starting value for loc, but still estimates loc > > Btw., I think the extradoc is quite misleading: I think this might be just the non-standard parameterization of the log-normal distribution because we use generic loc and scale handling. The parameterization has been discussed in the mailing list and for example in http://projects.scipy.org/scipy/ticket/1502 clearer documentation for this or a reparameterized distribution would be helpful for lognorm Josef > > """ > lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) > for x > 0, s > 0. > > If log x is normally distributed with mean mu and variance sigma**2, > then x is log-normally distributed with shape paramter sigma and scale > parameter exp(mu). > """ > > sigma seems to equal s in the function definition but mu does not appear at > all. It seems to enter via _pdf()/scale when looking at distributions.py, > wehere scale = exp(mu)? > > Christian > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Oct 10 19:35:54 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 10 Oct 2011 19:35:54 -0400 Subject: [SciPy-User] MLE with stats.lognorm In-Reply-To: References: Message-ID: On Mon, Oct 10, 2011 at 3:22 PM, wrote: > On Mon, Oct 10, 2011 at 10:26 AM, Christian K. wrote: >>> >> for example with starting value for loc >>> >>>>> print stats.lognorm.fit(x, loc=0) >>> >> (0.23800805074491538, 0.034900026034516723, 196.31113801786194) >>> > >>> > I see. Is there any workaround/patch to force loc=0.0? What is the >>> > meaning of loc anyway? >>> >>> loc is the starting value for fmin, I don't remember how to specify >>> starting values for shape parameters, I never used it. 
>>> >>> As in the ticket you could monkey patch the _fitstart function >>> >>> >>> stats.cauchy._fitstart = lambda x:(0,1) >>> >>> stats.cauchy.fit(x) >>> >>> or what I do to experiment with starting values is >>> >>> stats.distributions.lognorm_gen._fitstart = fitstart_lognormal >> >> Ok, but this is not different from calling fit like >> stats.lognorm.fit(samples, loc=0.0) >> >> I would really need to force loc=0.0 >> >> stats.lognorm.fit(samples, loc=0.0, floc=0.0) >> >> does not work either. > > ok, I misunderstood that you want to fix the location parameter at zero > > This looks like a different bug. > > floc=0 doesn't seem to work, I don't get any results that look close > to the true values > With a sample size of 2000 the MLE should be pretty close to the true > parameters: this is now http://projects.scipy.org/scipy/ticket/1536 I ran a few more distributions as examples, and my conclusion is: At this stage, don't trust any results with setting floc. As far as I know, nobody has ever checked the fixed parameter cases in distributions fit. Patches welcome. Josef > > > import numpy as np > > from scipy import stats > > np.set_printoptions(precision=4) > print 'true' > print 0.25, 0., 20.0 > print 'estimated, floc=0, loc=0' > for i in range(10): > ? ?x = stats.lognorm.rvs(0.25, 0., 20.0, size=2000) > ? ?print np.array(stats.lognorm.fit(x, floc=0)), \ > ? ? ? ? ? ?np.array(stats.lognorm.fit(x, loc=0)) > > true > 0.25 0.0 20.0 > estimated, floc=0, loc=0 > [ 2.1271 ?0. ? ? ?2.3999] [ ?0.2623 ? 1.0211 ?18.7911] > [ 2.1393 ?0. ? ? ?2.3952] [ ?0.2523 ? 0.0294 ?20.0117] > [ 2.1356 ?0. ? ? ?2.3978] [ ?0.2477 ? 0.03 ? ?19.9703] > [ 2.1378 ?0. ? ? ?2.3874] [ ?0.2496 ? 0.0301 ?19.9231] > [ 2.1463 ?0. ? ? ?2.3641] [ ?0.2474 ? 0.0292 ?19.9051] > [ 2.1408 ?0. ? ? ?2.3898] [ ?0.2459 ? 0.0303 ?20.0118] > [ 2.1252 ?0. ? ? ?2.4326] [ ?0.251 ? ?0.029 ? 20.0412] > [ 2.1296 ?0. ? ? ?2.3943] [ ?0.2476 ? 0.0296 ?19.8208] > [ 2.1344 ?0. ? ? ?2.401 ] [ ?0.2472 ? 
0.0299 ?19.9744] > [ 2.1383 ?0. ? ? ?2.4133] [ ?0.247 ? ?0.0301 ?20.1544] > > floc=0 is supposed to fix the location at 0, loc=0 only provides a > starting value for loc, but still estimates loc > >> >> Btw., I think the extradoc is quite misleading: > > I think this might be just the non-standard parameterization of the > log-normal distribution because we use generic loc and scale handling. > The parameterization has been discussed in the mailing list and for > example in http://projects.scipy.org/scipy/ticket/1502 > > clearer documentation for this or a reparameterized distribution would > be helpful for lognorm > > Josef > >> >> """ >> lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) >> for x > 0, s > 0. >> >> If log x is normally distributed with mean mu and variance sigma**2, >> then x is log-normally distributed with shape paramter sigma and scale >> parameter exp(mu). >> """ >> >> sigma seems to equal s in the function definition but mu does not appear at >> all. It seems to enter via _pdf()/scale when looking at distributions.py, >> wehere scale = exp(mu)? >> >> Christian >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From permafacture at gmail.com Tue Oct 11 01:09:27 2011 From: permafacture at gmail.com (Permafacture) Date: Mon, 10 Oct 2011 22:09:27 -0700 (PDT) Subject: [SciPy-User] scipy.optimize on a physical model Message-ID: <1a71d24b-0e01-4d50-b7d3-88a6da3808ef@s9g2000yql.googlegroups.com> I am working with a non-sequential optical ray-tracing program written in python. I would like to maximize the concentration ratio and optical efficiency of a model defined by several variables (one to dozens). Is it possible that code already written in scipy could do this? Specifically, I was looking at optimize.anneal. Are there scikits that might be more appropriate? 
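Since each objective evaluation (a ray trace) is expensive, it helps to see how few moving parts simulated annealing — the idea behind optimize.anneal — actually has. Below is a minimal pure-Python sketch; the toy objective, step size, and cooling schedule are illustrative assumptions, not Elliot's ray-trace model:

```python
import math
import random

def anneal(f, x0, n_iter=2000, t0=1.0, cooling=0.995, step=0.5, seed=0):
    """Minimal simulated annealing: minimize f over a list of floats."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        # Perturb one randomly chosen coordinate.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, step)
        fc = f(cand)
        # Always accept improvements; accept uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy multimodal objective standing in for a concentration-ratio figure of merit.
def objective(v):
    return sum(xi * xi - 2.0 * math.cos(3.0 * xi) + 2.0 for xi in v)

best, fbest = anneal(objective, [3.0, -3.0])
print(best, fbest)
```

Each iteration costs exactly one objective evaluation, so n_iter directly bounds the number of ray traces.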
Depending on the detail of the trace (the number of rays involved), it could take between .1 seconds and 10 seconds to evaluate the model, so the fewer function calls the better.

thanks for reading,
elliot

From sponsfreixes at gmail.com Tue Oct 11 06:19:30 2011
From: sponsfreixes at gmail.com (Sergi Pons Freixes)
Date: Tue, 11 Oct 2011 12:19:30 +0200
Subject: [SciPy-User] Source of the error between computers (version, architecture, etc)
Message-ID: 

Hi all,

I have some code that runs perfectly on:

Linux Toshiba-00 2.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:08:37 UTC 2011 i686 GNU/Linux
Python 2.6.5
Numpy 1.3.0

But on this machine:

Linux mirto 3.0-ARCH #1 SMP PREEMPT Tue Aug 30 08:53:25 CEST 2011 x86_64 Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz GenuineIntel GNU/Linux
Python 2.7.2
Numpy 1.6.1

I'm getting this error:

$ python main.py
Traceback (most recent call last):
  File "main.py", line 32, in
    data = aldp.merge_max_irta(data, irta)
  File "/home/sergi/Dropbox/doctorat/alfacs/codi/aldp.py", line 378, in merge_max_irta
    data = np.hstack((maxd, irta))
  File "/usr/lib/python2.7/site-packages/numpy/core/shape_base.py", line 270, in hstack
    return _nx.concatenate(map(atleast_1d,tup),1)
TypeError: invalid type promotion

Google hasn't helped much when searching about "TypeError: invalid type promotion" and similar queries. To reduce the uncertainty, I would like to know if the cause could be the difference in versions of the software, a different architecture (32 bits vs 64), or whatever.

Any clue?
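One common trigger for this exact TypeError — a guess, since the dtypes of maxd and irta are not shown — is mixing a structured (record) array with a plain ndarray, which newer NumPy refuses to promote to a common dtype. The arrays below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical stand-ins for maxd and irta: one structured array, one plain float array.
maxd = np.zeros(3, dtype=[("val", "f8")])
irta = np.zeros(3, dtype="f8")

err = None
try:
    np.hstack((maxd, irta))
except TypeError as exc:
    err = exc  # "invalid type promotion" (exact wording varies across NumPy versions)

print(type(err).__name__, err)
```

If that is the cause, converting both operands to a common plain dtype (or merging field by field) avoids the error; older NumPy was more permissive here, which could explain why the code ran on the Ubuntu machine.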
Regards,
Sergi

From jerome.kieffer at esrf.fr Tue Oct 11 06:54:10 2011
From: jerome.kieffer at esrf.fr (Jerome Kieffer)
Date: Tue, 11 Oct 2011 12:54:10 +0200
Subject: [SciPy-User] Source of the error between computers (version, architecture, etc)
In-Reply-To: 
References: 
Message-ID: <20111011125410.1ace44c9.jerome.kieffer@esrf.fr>

On Tue, 11 Oct 2011 12:19:30 +0200
Sergi Pons Freixes wrote:

> Hi all,
>
> I have some code that runs perfectly on:
>
> Linux Toshiba-00 2.6.32-33-generic #72-Ubuntu SMP Fri Jul 29 21:08:37
> UTC 2011 i686 GNU/Linux
> Python 2.6.5
> Numpy 1.3.0
>
> But on this machine:
>
> Linux mirto 3.0-ARCH #1 SMP PREEMPT Tue Aug 30 08:53:25 CEST 2011
> x86_64 Intel(R) Core(TM) i5-2500 CPU @ 3.30GHz GenuineIntel GNU/Linux
> Python 2.7.2
> Numpy 1.6.1
>
> I'm getting this error:

In Python 2.6, many operations are done in the native type (uint8 for example), which can lead to "odd" behaviour. You should use dtype="float" in the sum or mean methods of ndarray. In Python 2.7 the default type is switched to float, which changes some results and can break tests, but the results are usually "better".

Hope this helps

--
Jerome Kieffer
Online Data Analysis / SoftGroup

From tiabaldu at gmail.com Tue Oct 11 04:35:49 2011
From: tiabaldu at gmail.com (Ruslan Mullakhmetov)
Date: Tue, 11 Oct 2011 01:35:49 -0700 (PDT)
Subject: [SciPy-User] Finding Pareto number in scipy
Message-ID: <15896998.1745.1318322149705.JavaMail.geo-discussion-forums@yqai1>

Hi folks,

I often need to find the Pareto number (c: \Sum_{x_i < c } f(x_i) = 1 - c ). Is there such a function in scipy/numpy? I wrote my own, but I'd prefer to use a library one to avoid copying my own implementation across projects. (I do not want to put it into the standard library path.)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jake.biesinger at gmail.com Tue Oct 11 13:29:07 2011 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Tue, 11 Oct 2011 10:29:07 -0700 Subject: [SciPy-User] From 1-D boolean array to integer index Message-ID: Hi all! This seems a trivial question, but I couldn't find it in the archives. I have a 1-D bool array which I'd like to convert to an integer index. The best I've come up with is: >>> int(''.join(['1' if e else '0' for e in sp.array([True, False])]), 2) 2 >>> int(''.join(['1' if e else '0' for e in sp.array([True, False, True])]), 2) 5 Is there an easier way to do this? Thanks! -- Jake Biesinger Graduate Student Xie Lab, UC Irvine -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Oct 11 13:38:28 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 11 Oct 2011 13:38:28 -0400 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: Message-ID: On Tue, Oct 11, 2011 at 1:29 PM, Jacob Biesinger wrote: > Hi all! > This seems a trivial question, but I couldn't find it in the archives. ?I > have a 1-D bool array which I'd like to convert to an integer index. ?The > best I've come up with is: >>>> int(''.join(['1' if e else '0' for e in?sp.array([True, False])]), 2) > 2 >>>> int(''.join(['1' if e else '0' for e in sp.array([True, False, True])]), >>>> 2) > 5 > Is there an easier way to do this? I don't quite understand what you'd like to do, but there is >>> np.nonzero(np.array([True, False, True])) (array([0, 2]),) >>> np.array([True, False, True]).astype(int) array([1, 0, 1]) >>> np.array([True, False, True]).astype('S1') array(['T', 'F', 'T'], dtype='|S1') >>> np.array([True, False, True]).astype(int).astype('S1') array(['1', '0', '1'], dtype='|S1') Josef > Thanks! 
> -- > Jake Biesinger > Graduate Student > Xie Lab, UC Irvine > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From e.antero.tammi at gmail.com Tue Oct 11 13:49:23 2011 From: e.antero.tammi at gmail.com (eat) Date: Tue, 11 Oct 2011 20:49:23 +0300 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: Message-ID: Hi On Tue, Oct 11, 2011 at 8:29 PM, Jacob Biesinger wrote: > Hi all! > > This seems a trivial question, but I couldn't find it in the archives. I > have a 1-D bool array which I'd like to convert to an integer index. The > best I've come up with is: > > >>> int(''.join(['1' if e else '0' for e in sp.array([True, False])]), 2) > 2 > >>> int(''.join(['1' if e else '0' for e in sp.array([True, False, > True])]), 2) > 5 > > Is there an easier way to do this? > Perhaps something like this (np.array([True, False, True])* (2** np.arange(3))).sum() Just my 2 cents, eat > > Thanks! > -- > Jake Biesinger > Graduate Student > Xie Lab, UC Irvine > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Tue Oct 11 13:57:49 2011 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Tue, 11 Oct 2011 13:57:49 -0400 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: Message-ID: Another recipe: >>> x = [True, False] >>> int( (len(x) * '%d') % tuple(x) ,2) 2 -- Oleksandr 2011/10/11 eat > Hi > > On Tue, Oct 11, 2011 at 8:29 PM, Jacob Biesinger > wrote: > >> Hi all! >> >> This seems a trivial question, but I couldn't find it in the archives. I >> have a 1-D bool array which I'd like to convert to an integer index. 
The >> best I've come up with is: >> >> >>> int(''.join(['1' if e else '0' for e in sp.array([True, False])]), 2) >> 2 >> >>> int(''.join(['1' if e else '0' for e in sp.array([True, False, >> True])]), 2) >> 5 >> >> Is there an easier way to do this? >> > Perhaps something like this > (np.array([True, False, True])* (2** np.arange(3))).sum() > > Just my 2 cents, > eat > >> >> Thanks! >> -- >> Jake Biesinger >> Graduate Student >> Xie Lab, UC Irvine >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alok at merfinllc.com Tue Oct 11 14:30:10 2011 From: alok at merfinllc.com (Alok Singhal) Date: Tue, 11 Oct 2011 11:30:10 -0700 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: Message-ID: Another: np.sum((1 << np.arange(data.size))[data]) On Tue, Oct 11, 2011 at 10:57 AM, Oleksandr Huziy wrote: > Another recipe: > >>>> x = [True, False] >>>> int( (len(x) * '%d') % tuple(x) ,2) > 2 > > -- > Oleksandr > > 2011/10/11 eat >> >> Hi >> >> On Tue, Oct 11, 2011 at 8:29 PM, Jacob Biesinger >> wrote: >>> >>> Hi all! >>> This seems a trivial question, but I couldn't find it in the archives. ?I >>> have a 1-D bool array which I'd like to convert to an integer index. ?The >>> best I've come up with is: >>> >>> int(''.join(['1' if e else '0' for e in?sp.array([True, False])]), 2) >>> 2 >>> >>> int(''.join(['1' if e else '0' for e in sp.array([True, False, >>> >>> True])]), 2) >>> 5 >>> Is there an easier way to do this? >> >> Perhaps something like this >> (np.array([True, False, True])* (2** np.arange(3))).sum() >> Just my 2 cents, >> eat >>> >>> Thanks! 
>>> -- >>> Jake Biesinger >>> Graduate Student >>> Xie Lab, UC Irvine >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From alan.isaac at gmail.com Tue Oct 11 15:25:54 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Tue, 11 Oct 2011 15:25:54 -0400 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: Message-ID: <4E949842.4060102@gmail.com> On 10/11/2011 1:29 PM, Jacob Biesinger wrote: > I have a 1-D bool array which I'd like to convert to an integer index. Sounds like you may be able to just use your bool array: >>> idx = np.array([True, False, True]) >>> a = np.random.random(3) >>> a array([ 0.04818879, 0.49417941, 0.70470834]) >>> a[idx] array([ 0.04818879, 0.70470834]) hth, Alan Isaac From jake.biesinger at gmail.com Tue Oct 11 16:38:08 2011 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Tue, 11 Oct 2011 13:38:08 -0700 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: <4E949842.4060102@gmail.com> References: <4E949842.4060102@gmail.com> Message-ID: Thanks for the ideas everyone. Some clarification: the 1d bool array represents the bits of the integer I'm interested in. I have binary observations in a vector of length ~10, so the integer index will be the particular combination of 1's and 0's in the vector. In this case, the array will not be very long (~10 elements) but the conversion needs to be fast (part of an inner loop). 
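For the archive, the bit-weight recipes collected in this thread can be wrapped up as one self-contained function. This is a sketch in plain numpy (the helper name `bools_to_int` is invented; most-significant bit first, matching the `int(''.join(...), 2)` version):

```python
import numpy as np

def bools_to_int(bits):
    """Interpret a 1-D boolean array as the bits of an integer, MSB first."""
    bits = np.asarray(bits, dtype=bool)
    # place values 2**(n-1), ..., 2, 1
    weights = 1 << np.arange(bits.size - 1, -1, -1)
    return int((weights * bits).sum())

bools_to_int([True, False])        # [1, 0] in binary -> 2
bools_to_int([True, False, True])  # [1, 0, 1] in binary -> 5
```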
Here's some timings on your candidate suggestions: # setup size = 10 b_array = sp.rand(size) < .5 # eat's suggestion place_values = 2**sp.arange(size-1, -1, -1) %timeit (b_array * place_values).sum() 100000 loops, best of 3: 4.83 us per loop # Alok's suggestion bit_shift = sp.arange(size-1, -1, -1) %timeit sp.sum((1<<bit_shift)[b_array]) 100000 loops, best of 3: 7.58 us per loop # Oleksandr's suggestion string_format = size * '%d' %timeit int( string_format % tuple(b_array) ,2) 100000 loops, best of 3: 7.24 us per loop # my original idea %timeit int(''.join(['1' if e else '0' for e in b_array]), 2) 1000000 loops, best of 3: 1.49 us per loop I'm kinda surprised by the timings-- building a new list, doing a string join, then converting the string to an integer is faster than bit-shift & summation or multiplication & summation. And it seems that python's string formatting operator is pretty efficient as well! Just as fast as some of the other ops. Thanks again for the suggestions-- Guess I'll stick with my current (fastest) implementation. -- Jake Biesinger Graduate Student Xie Lab, UC Irvine On Tue, Oct 11, 2011 at 12:25 PM, Alan G Isaac wrote: > On 10/11/2011 1:29 PM, Jacob Biesinger wrote: > > I have a 1-D bool array which I'd like to convert to an integer index. > > Sounds like you may be able to just use your bool array: > > >>> idx = np.array([True, False, True]) > >>> a = np.random.random(3) > >>> a > array([ 0.04818879, 0.49417941, 0.70470834]) > >>> a[idx] > array([ 0.04818879, 0.70470834]) > > hth, > Alan Isaac > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Oct 11 17:11:49 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 11 Oct 2011 17:11:49 -0400 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: <4E949842.4060102@gmail.com> Message-ID: On Tue, Oct 11, 2011 at 4:38 PM, Jacob Biesinger wrote: > Thanks for the ideas everyone. Some clarification: the 1d bool array > represents the bits of the integer I'm interested in. I have binary > observations in a vector of length ~10, so the integer index will be the > particular combination of 1's and 0's in the vector. > In this case, the array will not be very long (~10 elements) but the > conversion needs to be fast (part of an inner loop).
> Here's some timings on your candidate suggestions: > # setup > size = 10 > b_array = sp.rand(size) < .5 > # eat's suggestion > place_values = 2**sp.arange(size-1, -1, -1) > %timeit (b_array * place_values).sum() > 100000 loops, best of 3: 4.83 us per loop > # Alok's suggestion > bit_shift = sp.arange(size-1, -1, -1) > %timeit sp.sum((1<<bit_shift)[b_array]) > 100000 loops, best of 3: 7.58 us per loop > # Oleksandr's suggestion > string_format = size * '%d' > %timeit int( string_format % tuple(b_array) ,2) > 100000 loops, best of 3: 7.24 us per loop > # my original idea > %timeit int(''.join(['1' if e else '0' for e in b_array]), 2) > 1000000 loops, best of 3: 1.49 us per loop > > I'm kinda surprised by the timings-- building a new list, doing a string > join, then converting the string to an integer is faster than bit-shift & > summation or multiplication & summation. And it seems that python's string > formatting operator is pretty efficient as well! Just as fast as some of > the other ops. > Thanks again for the suggestions-- Guess I'll stick with my current > (fastest) implementation. The difference between plain python and numpy would be that with eat's solution you can do 1000000 conversions in one operation instead of 1000000 python loops. Josef > -- > Jake Biesinger > Graduate Student > Xie Lab, UC Irvine > > > > On Tue, Oct 11, 2011 at 12:25 PM, Alan G Isaac wrote: >> >> On 10/11/2011 1:29 PM, Jacob Biesinger wrote: >> > I have a 1-D bool array which I'd like to convert to an integer index. >> >> Sounds like you may be able to just use your bool array: >> >>     >>> idx = np.array([True, False, True]) >>     >>> a = np.random.random(3) >>     >>> a >>     array([ 0.04818879, 0.49417941, 0.70470834]) >>     >>> a[idx] >>
array([ 0.04818879, ?0.70470834]) >> >> hth, >> Alan Isaac >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From guziy.sasha at gmail.com Tue Oct 11 17:19:35 2011 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Tue, 11 Oct 2011 17:19:35 -0400 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: <4E949842.4060102@gmail.com> Message-ID: Try shifting before the loop in the Alok's suggestion. -- Oleksandr 2011/10/11 Jacob Biesinger > Thanks for the ideas everyone. Some clarification: the 1d bool array > represents the bits of the integer I'm interested in. I have binary > observations in a vector of length ~10, so the integer index will be the > particular combination of 1's and 0's in the vector. > > In this case, the array will not be very long (~10 elements) but the > conversion needs to be fast (part of an inner loop). 
> > Here's some timings on your candidate suggestions: > > # setup > size = 10 > b_array = sp.rand(size) < .5 > > # eat's suggestion > place_values = 2**sp.arange(size-1, -1, -1) > %timeit (b_array * place_values).sum() > 100000 loops, best of 3: 4.83 us per loop > > # Alok's suggestion > bit_shift = sp.arange(size-1, -1, -1) > %timeit sp.sum((1<<bit_shift)[b_array]) > 100000 loops, best of 3: 7.58 us per loop > > # Oleksandr's suggestion > string_format = size * '%d' > %timeit int( string_format % tuple(b_array) ,2) > 100000 loops, best of 3: 7.24 us per loop > > # my original idea > %timeit int(''.join(['1' if e else '0' for e in b_array]), 2) > 1000000 loops, best of 3: 1.49 us per loop > > > I'm kinda surprised by the timings-- building a new list, doing a string > join, then converting the string to an integer is faster than bit-shift & > summation or multiplication & summation. And it seems that python's string > formatting operator is pretty efficient as well! Just as fast as some of > the other ops. > > Thanks again for the suggestions-- Guess I'll stick with my current > (fastest) implementation. > -- > Jake Biesinger > Graduate Student > Xie Lab, UC Irvine > > > > On Tue, Oct 11, 2011 at 12:25 PM, Alan G Isaac wrote: > >> On 10/11/2011 1:29 PM, Jacob Biesinger wrote: >> > I have a 1-D bool array which I'd like to convert to an integer index.
>> >> Sounds like you may be able to just use your bool array: >> >> >>> idx = np.array([True, False, True]) >> >>> a = np.random.random(3) >> >>> a >> array([ 0.04818879, 0.49417941, 0.70470834]) >> >>> a[idx] >> array([ 0.04818879, 0.70470834]) >> >> hth, >> Alan Isaac >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jake.biesinger at gmail.com Tue Oct 11 22:00:48 2011 From: jake.biesinger at gmail.com (Jacob Biesinger) Date: Tue, 11 Oct 2011 19:00:48 -0700 Subject: [SciPy-User] From 1-D boolean array to integer index In-Reply-To: References: <4E949842.4060102@gmail.com> Message-ID: On Tue, Oct 11, 2011 at 2:19 PM, Oleksandr Huziy wrote: > Try shifting before the loop in the Alok's suggestion. > Thanks for the catch-- it's a bit faster: bit_shift = 1 << sp.arange(size-1, -1, -1) %timeit sp.sum((bit_shift)[data]) 100000 loops, best of 3: 4.06 us per loop Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From klonuo at gmail.com Wed Oct 12 13:17:51 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 12 Oct 2011 19:17:51 +0200 Subject: [SciPy-User] scikits portal seems down Message-ID: http://scikits.appspot.com/scikits raises errors I guess it couldn't be that GAE just updated to new version, including support for Python 2.7 BTW, but maybe someone uploaded unfinished code? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From klonuo at gmail.com Wed Oct 12 13:34:52 2011 From: klonuo at gmail.com (Klonuo Umom) Date: Wed, 12 Oct 2011 19:34:52 +0200 Subject: [SciPy-User] scikits portal seems down In-Reply-To: References: Message-ID: After 15min everything is fine now I did not grab the error... nevermind On Wed, Oct 12, 2011 at 7:17 PM, Klonuo Umom wrote: > http://scikits.appspot.com/scikits raises errors > > I guess it couldn't be that GAE just updated to new version, including > support for Python 2.7 BTW, but maybe someone uploaded unfinished code? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Oct 12 13:36:26 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Oct 2011 13:36:26 -0400 Subject: [SciPy-User] scikits portal seems down In-Reply-To: References: Message-ID: On Wed, Oct 12, 2011 at 1:17 PM, Klonuo Umom wrote: > http://scikits.appspot.com/scikits raises errors > I guess it couldn't be that GAE just updated to new version, including > support for Python 2.7 BTW, but maybe someone uploaded unfinished code? It works for me. Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From bblais at bryant.edu Wed Oct 12 20:23:58 2011 From: bblais at bryant.edu (Brian Blais) Date: Wed, 12 Oct 2011 20:23:58 -0400 Subject: [SciPy-User] fitting cosine with curve_fit - problem with frequency Message-ID: <222B5606-F021-4517-9F06-01FB7FBE5845@bryant.edu> Hello, So I am trying to fit some data that is a mixture of oscillations, so I figured I'd try to do a "toy" problem...and that problem is stumping me (full code at the bottom). I generate data like: x=pylab.linspace(0,50,100) y=cos(x/2)+pylab.randn(len(x))*.1 and fit a model like: def m1(t,a,b,c): return a*cos(b*t+c) with scipy.optimize.curve_fit. 
It seems as if the method used in curve_fit doesn't like oscillatory data, especially when estimating the frequency. Unless my initial guess for the frequency is *very* close to the right answer (i.e. I need the right answer already to get the answer), it doesn't even get close. Is there a better way to fit this sort of a function? Should I do an fft, pick off frequencies, and use those as the initial estimates? Am I doing something wrong? any ideas, or references to places I can read about it, would be great! thanks, Brian Blais -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais http://bblais.blogspot.com/ #=====================code below==================== import pylab from scipy import optimize from numpy import * def m1(t,a,b,c): return a*cos(b*t+c) x=pylab.linspace(0,50,100) y=cos(x/2)+pylab.randn(len(x))*.1 pylab.figure(1) pylab.clf() pylab.plot(x,y,'o-') func=m1 p0=[5,0.9,2] popt, pcov = optimize.curve_fit(func, x, y, p0=p0, # initial guess ) xf=linspace(x[0],x[-1],1000) yf=func(xf,*popt) pylab.plot(xf,yf,'.-') pylab.draw() From josef.pktd at gmail.com Wed Oct 12 20:28:17 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Oct 2011 20:28:17 -0400 Subject: [SciPy-User] circular statistic, anybody? Message-ID: Some of the few functions in scipy.stats where I have no idea what they are doing (and whether they are doing the correct thing) are the circular statistics, circmean, circvar and circstd. (And they are one of the few ones where I have no interest in figuring it out.) Test coverage is zero, and they are a bit picky on inputs http://projects.scipy.org/scipy/ticket/1537 I would like to get some verified results, and improved docstrings wouldn't hurt either. 
I found package circular in R >>> from scipy import stats >>> x = np.arange(20)/20.*np.pi >>> stats.circmean(x) 1.4922565104551515 in R: > library(circular) > xc = 0:19 > xc = xc /20 * pi > mean(circular(xc)) Circular Data: Type = angles Units = radians Template = none Modulo = asis Zero = 0 Rotation = counter [1] 1.492256510455152 That's the only case I managed to match. If xc is in [0, 2*pi], then R circular produces different results (NaN or a different number with argument modulo=2pi for example) >From the R help: """ The function circular is used to create circular objects. as.circular and is.circular coerce an object to a circular and test whether an object is a circular data. Usage circular(x, type = c("angles", "directions"), units = c("radians", "degrees", "hours"), template = c("none", "geographics", "clock12", "clock24"), modulo = c("asis", "2pi", "pi"), zero = 0, rotation = c("counter", "clock"), names) """ Does anybody have an idea what these options mean (I don't), and what the scipy.stats.circ* functions are actually doing? Thanks, Josef From josef.pktd at gmail.com Wed Oct 12 20:55:17 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Oct 2011 20:55:17 -0400 Subject: [SciPy-User] fitting cosine with curve_fit - problem with frequency In-Reply-To: <222B5606-F021-4517-9F06-01FB7FBE5845@bryant.edu> References: <222B5606-F021-4517-9F06-01FB7FBE5845@bryant.edu> Message-ID: On Wed, Oct 12, 2011 at 8:23 PM, Brian Blais wrote: > Hello, > > So I am trying to fit some data that is a mixture of oscillations, so I figured I'd try to do a "toy" problem...and that problem is stumping me (full code at the bottom). ?I generate data like: > > x=pylab.linspace(0,50,100) > y=cos(x/2)+pylab.randn(len(x))*.1 > > and fit a model like: > > def m1(t,a,b,c): > ? ?return a*cos(b*t+c) > > > with scipy.optimize.curve_fit. ?It seems as if the method used in curve_fit doesn't like oscillatory data, especially when estimating the frequency. 
?Unless my initial guess for the frequency is *very* close to the right answer (i.e. I need the right answer already to get the answer), it doesn't even get close. > > Is there a better way to fit this sort of a function? ?Should I do an fft, pick off frequencies, and use those as the initial estimates? ?Am I doing something wrong? If the starting value for b is in the interval 0.4, 0.6, then the fitted curve looks good. I guess a rough starting value for the frequency is necessary. My guess is that the curvature of the objective function is only *locally* convex in the frequency, in b. Also, it looks to me there is an indeterminacy in c (modulo 2*pi, or it looks like modulo pi with switch in sign for b) That's the problems that leastsq/curvefit might have, estimating in frequency domain might be more appropriate. I guess somebody has a better solution. Josef > > any ideas, or references to places I can read about it, would be great! > > ? ? ? ? ? ? ? ?thanks, > > ? ? ? ? ? ? ? ? ? ? ? ?Brian Blais > > -- > Brian Blais > bblais at bryant.edu > http://web.bryant.edu/~bblais > http://bblais.blogspot.com/ > > #=====================code below==================== > import pylab > from scipy import optimize > from numpy import * > > def m1(t,a,b,c): > ? ?return a*cos(b*t+c) > > > x=pylab.linspace(0,50,100) > y=cos(x/2)+pylab.randn(len(x))*.1 > > pylab.figure(1) > pylab.clf() > pylab.plot(x,y,'o-') > > func=m1 > p0=[5,0.9,2] > > popt, pcov = optimize.curve_fit(func, x, y, > ? ? ? ?p0=p0, ?# initial guess > ? ? ? ?) 
> > > xf=linspace(x[0],x[-1],1000) > yf=func(xf,*popt) > > pylab.plot(xf,yf,'.-') > > pylab.draw() > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed Oct 12 21:09:59 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Oct 2011 21:09:59 -0400 Subject: [SciPy-User] fitting cosine with curve_fit - problem with frequency In-Reply-To: References: <222B5606-F021-4517-9F06-01FB7FBE5845@bryant.edu> Message-ID: On Wed, Oct 12, 2011 at 8:55 PM, wrote: > On Wed, Oct 12, 2011 at 8:23 PM, Brian Blais wrote: >> Hello, >> >> So I am trying to fit some data that is a mixture of oscillations, so I figured I'd try to do a "toy" problem...and that problem is stumping me (full code at the bottom). ?I generate data like: >> >> x=pylab.linspace(0,50,100) >> y=cos(x/2)+pylab.randn(len(x))*.1 >> >> and fit a model like: >> >> def m1(t,a,b,c): >> ? ?return a*cos(b*t+c) >> >> >> with scipy.optimize.curve_fit. ?It seems as if the method used in curve_fit doesn't like oscillatory data, especially when estimating the frequency. ?Unless my initial guess for the frequency is *very* close to the right answer (i.e. I need the right answer already to get the answer), it doesn't even get close. >> >> Is there a better way to fit this sort of a function? ?Should I do an fft, pick off frequencies, and use those as the initial estimates? ?Am I doing something wrong? > > If the starting value for b is in the interval 0.4, 0.6, then the > fitted curve looks good. I guess a rough starting value for the > frequency is necessary. some guessing, but it works in the examples I tried: import matplotlib.pyplot as plt psd, fr = plt.psd(y) midx = np.argmax(psd) print fr[midx], psd[midx] p0[1] = fr[midx]*np.pi*2 Josef > > My guess is that the curvature of the objective function is only > *locally* convex in the frequency, in b. 
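Josef's PSD idea can also be done with numpy's FFT alone: take the dominant spectral bin and convert it to an angular frequency to seed p0 for b. A sketch assuming roughly uniform sample spacing (noise-free data here, so the result is deterministic):

```python
import numpy as np

x = np.linspace(0, 50, 100)
y = np.cos(x / 2)                    # true angular frequency b = 0.5

dx = x[1] - x[0]                     # sample spacing
spec = np.abs(np.fft.rfft(y))
k = 1 + np.argmax(spec[1:])          # dominant bin, skipping DC
b0 = 2 * np.pi * k / (len(y) * dx)   # rough starting value for b, ~0.50

# b0 is close enough to the true frequency to seed curve_fit's p0.
```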
> Also, it looks to me there is an indeterminacy in c (modulo 2*pi, or > it looks like modulo pi with switch in sign for b) > > That's the problems that leastsq/curvefit might have, estimating in > frequency domain might be more appropriate. I guess somebody has a > better solution. > > Josef > >> >> any ideas, or references to places I can read about it, would be great! >> >> ? ? ? ? ? ? ? ?thanks, >> >> ? ? ? ? ? ? ? ? ? ? ? ?Brian Blais >> >> -- >> Brian Blais >> bblais at bryant.edu >> http://web.bryant.edu/~bblais >> http://bblais.blogspot.com/ >> >> #=====================code below==================== >> import pylab >> from scipy import optimize >> from numpy import * >> >> def m1(t,a,b,c): >> ? ?return a*cos(b*t+c) >> >> >> x=pylab.linspace(0,50,100) >> y=cos(x/2)+pylab.randn(len(x))*.1 >> >> pylab.figure(1) >> pylab.clf() >> pylab.plot(x,y,'o-') >> >> func=m1 >> p0=[5,0.9,2] >> >> popt, pcov = optimize.curve_fit(func, x, y, >> ? ? ? ?p0=p0, ?# initial guess >> ? ? ? ?) >> >> >> xf=linspace(x[0],x[-1],1000) >> yf=func(xf,*popt) >> >> pylab.plot(xf,yf,'.-') >> >> pylab.draw() >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > From ssalha at gmail.com Wed Oct 12 11:11:19 2011 From: ssalha at gmail.com (ssalha) Date: Wed, 12 Oct 2011 08:11:19 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] scipy.io.numpyio.fwrite replacement?? In-Reply-To: <4E020806.4080209@ut.ee> References: <4E020806.4080209@ut.ee> Message-ID: <32638582.post@talk.nabble.com> Hello, Please let me know if you were able to solve this problem. Your feedback is greatly appreciated, as I am stuck on the same question now of not finding a replacement of scipy.io.numpy on the latest version, using ipython2.7.1, ubuntu unity. and scipy 0.8 version. Tried downgrading to an older version of scipy, but there was still a problem ... 
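For readers hitting the same removal: the plain ndarray methods `tofile` and `np.fromfile` write and read the same kind of raw, headerless binary that `scipy.io.numpyio.fwrite` produced (and that Fortran direct-access I/O expects). A minimal sketch; the file path here is just an illustration:

```python
import os
import tempfile

import numpy as np

data = np.linspace(0.0, 1.0, 5)
path = os.path.join(tempfile.mkdtemp(), "record.dat")  # placeholder path

# numpyio.fwrite(fid, data.size, data, 'f') becomes, roughly:
with open(path, "wb") as fid:
    data.astype(np.float32).tofile(fid)  # raw, headerless single-precision

back = np.fromfile(path, dtype=np.float32)  # round-trip read
```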
Thank you, Sara Andres Luhamaa wrote: > > Hi all, > I try to upgrade code from scipy 0.7.2 to 0.9.0 and see that there is no > more scipy.io.numpyio. I found two similar questions in the archive of > this list without a reasonable answer (new in the list, so cannot > replay to these). > In particular I need scipy.io.numpyio.fwrite and I do not know how to > replace it with something else. np.save seems to save in double, > np.lib.format has some options, but nothing seems what I want. As I am > not using python to read the data file, using some other file format > like ".npy" or ".npz" is not an option. > > To illustrate what I want to get, to those familiar with fortran, the > write sequence in fortran looks like this: > open (filenum, file=filename,form='unformatted', & > & access='DIRECT', status='unknown', RECL=lon) > write (filenum,REC=IREC) data > > Any guidance really appreciated, > Andres > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/scipy.io.numpyio.fwrite-replacement---tp31904137p32638582.html Sent from the Scipy-User mailing list archive at Nabble.com. From JRadinger at gmx.at Fri Oct 14 07:51:59 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Fri, 14 Oct 2011 13:51:59 +0200 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall Message-ID: <20111014115159.241250@gmx.net> Hello, I did a statistical regression analysis (linear regression) in R which has following parameters: Y = X1 + X2 from the R-analysis I can get the intercept and slopes for the independent variables so that I get: Y = Intercept + slope1*X1 + slope2*X2 Now I want to use that in Scipy to calculate new predicted Y values. Generally that is not a problem. I use the new X1 and X2 as input and the slopes and intercept are predefined. So I can easily calculate Y new. 
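The prediction step just described, and the textbook interval around it, can be sketched with numpy alone. Everything numeric below is a placeholder for values exported from the R fit (coef(fit), vcov(fit), summary(fit)$sigma), and the normal quantile 1.96 stands in for the exact t value, so treat this as an illustration of the formula rather than a finished recipe:

```python
import numpy as np

# Placeholders -- in practice exported from the R fit:
intercept, slope1, slope2 = 2.0, 0.5, -1.5   # coef(fit)
V = np.diag([0.04, 0.01, 0.01])              # vcov(fit), 3x3 coef covariance
s2 = 0.25                                    # summary(fit)$sigma ** 2

x1_new, x2_new = 1.2, 3.4
x = np.array([1.0, x1_new, x2_new])          # design row (1 for the intercept)

y_hat = x @ np.array([intercept, slope1, slope2])

# Std. error of a *new observation* (prediction interval);
# drop s2 inside the sqrt for a confidence interval of the mean response.
se_pred = np.sqrt(s2 + x @ V @ x)
lo, hi = y_hat - 1.96 * se_pred, y_hat + 1.96 * se_pred
```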
Now my question arises: How can I calculate a kind of uncertainty (e.g. a confidence intervall) for the new Y? What do I have to extract from R and how do I have to use the extracted parameters in scipy to calculate such things? Is that generally possible? Thank you very much Johannes -- NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! Jetzt informieren: http://www.gmx.net/de/go/freephone From tazzben at me.com Thu Oct 13 15:55:42 2011 From: tazzben at me.com (tazzben) Date: Thu, 13 Oct 2011 19:55:42 +0000 (GMT) Subject: [SciPy-User] fmin / I'm sure this is a dumb syntax error on my part Message-ID: So, I'm new to the optimization tools in scipy, so I'm trying to solve a problem I already know the answer to in an attempt to learn the system. What I'm trying to solve is: min of 100(y-x^2)^2+(1-x)^2 solving for both x and y (the solution is 1,1).? Based on the example, I tried this: from scipy.optimize import fmin def sFunction(x): return 100.0(x[1]-x[0]**2.0)**2.0 + (1.0-x[0])**2.0 iv = [2.0, 5.0] fmin(sFunction, iv) Also in an earlier version I had x and y, but it wasn't working so I thought it might be better as an array. ?So when I do this I get an error that says that a float is not callable. ?What I think is happening is that it is substituting the?initial?guess in (thus it returns a float) but it isn't iterating for some reason.? I'm sure it's really stupid, but could someone explain what's suppose to be going on? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Fri Oct 14 10:56:35 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 14 Oct 2011 10:56:35 -0400 Subject: [SciPy-User] fmin / I'm sure this is a dumb syntax error on my part In-Reply-To: References: Message-ID: On Thu, Oct 13, 2011 at 3:55 PM, tazzben wrote: > So, I'm new to the optimization tools in scipy, so I'm trying to solve a > problem I already know the answer to in an attempt to learn the system. 
> What I'm trying to solve is: > min of 100(y-x^2)^2+(1-x)^2 > solving for both x and y (the solution is 1,1). > Based on the example, I tried this: > from scipy.optimize import fmin > def sFunction(x): > return 100.0(x[1]-x[0]**2.0)**2.0 + (1.0-x[0])**2.0 > iv = [2.0, 5.0] > fmin(sFunction, iv) > Also in an earlier version I had x and y, but it wasn't working so I thought > it might be better as an array. ?So when I do this I get an error that says > that a float is not callable. ?What I think is happening is that it is > substituting the?initial?guess in (thus it returns a float) but it isn't > iterating for some reason. You need a * Between 100.0 and (. It's treating the float like a function and trying to call it. Skipper > I'm sure it's really stupid, but could someone explain what's suppose to be > going on? > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From cpolymeris at gmail.com Fri Oct 14 13:24:30 2011 From: cpolymeris at gmail.com (Camilo Polymeris) Date: Fri, 14 Oct 2011 17:24:30 +0000 Subject: [SciPy-User] Averaging over unevenly-spaced records Message-ID: Hello all, I am pretty new to numpy (and numerical software packages in general), so this may be a basic question, but I would appreciate any help. Say I have a recarray like the following: r = array([ ... (datetime.datetime(2011, 3, 30, 16, 1, 15, 911000), 1.39, 18), (datetime.datetime(2011, 3, 30, 16, 1, 16, 181000), 1.34, 22), (datetime.datetime(2011, 3, 30, 16, 1, 16, 630000), 1.37, 19), (datetime.datetime(2011, 3, 30, 16, 1, 16, 922000), 1.34, 19), (datetime.datetime(2011, 3, 30, 16, 1, 17, 324000), 1.33, 19), ... 
dtype=[('datetime', '|O8'), ('A', ' References: <20111014115159.241250@gmx.net> Message-ID: On Fri, Oct 14, 2011 at 7:51 AM, Johannes Radinger wrote: > Hello, > > I did a statistical regression analysis (linear regression) in R which > has following parameters: > > Y = X1 + X2 > > from the R-analysis I can get the intercept and slopes for the independent variables so that I get: > > Y = Intercept + slope1*X1 + slope2*X2 > > Now I want to use that in Scipy to calculate new predicted Y values. > Generally that is not a problem. I use the new X1 and X2 as input and > the slopes and intercept are predefined. So I can easily calculate Y new. > > Now my question arises: > How can I calculate a kind of uncertainty (e.g. a confidence intervall) for > the new Y? What do I have to extract from R and how do I have to use > the extracted parameters in scipy to calculate such things? > Is that generally possible? Better sandbox then nothing: https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/regression/predstd.py#L28 This is my version for scikits.statsmodels.OLS (and maybe WLS) usage example https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/examples/tut_ols.py Josef > > Thank you very much > Johannes > -- > NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! > Jetzt informieren: http://www.gmx.net/de/go/freephone > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From oliphant at enthought.com Sun Oct 16 11:25:28 2011 From: oliphant at enthought.com (Travis Oliphant) Date: Sun, 16 Oct 2011 10:25:28 -0500 Subject: [SciPy-User] circular statistic, anybody? 
In-Reply-To: References: Message-ID: <2F93093D-3131-4890-B602-9C5DFA2C4FE4@enthought.com> The functions are fairly straightforward implementations of the concepts discussed here: http://en.wikipedia.org/wiki/Directional_statistics The range [low,high] is assumed to be periodic and linearly mapable to [0, 2*pi]. These numbers are then interpreted as angles, converted to complex numbers on the unit circle and the standard mean of the complex number computed. The angle of this mean-value complex number is the returned mean (mapped back to the range [low, high]) The magnitude of this mean-value complex number is used to calculate the variance (scaled by the mapping coefficient). More tests and a link to the wiki-pedia article would be useful in the documentation. I am not familiar with the CircStats package in R but it looks to have nice specialization to particular domains where circular statistics are relevant (time periods, and angles). Thanks, -Travis On Oct 12, 2011, at 7:28 PM, josef.pktd at gmail.com wrote: > Some of the few functions in scipy.stats where I have no idea what > they are doing (and whether they are doing the correct thing) are the > circular statistics, circmean, circvar and circstd. (And they are one > of the few ones where I have no interest in figuring it out.) > > Test coverage is zero, and they are a bit picky on inputs > http://projects.scipy.org/scipy/ticket/1537 > > I would like to get some verified results, and improved docstrings > wouldn't hurt either. > > I found package circular in R > >>>> from scipy import stats >>>> x = np.arange(20)/20.*np.pi >>>> stats.circmean(x) > 1.4922565104551515 > > in R: > >> library(circular) >> xc = 0:19 >> xc = xc /20 * pi >> mean(circular(xc)) > Circular Data: > Type = angles > Units = radians > Template = none > Modulo = asis > Zero = 0 > Rotation = counter > [1] 1.492256510455152 > > That's the only case I managed to match. 
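Travis's description translates almost line for line into numpy. The sketch below is not the scipy source, just the recipe he outlines (map [low, high] to angles, average on the unit circle, map the resultant's angle back); it reproduces the stats.circmean value quoted in this thread:

```python
import numpy as np

def circ_mean(samples, low=0.0, high=2 * np.pi):
    """Circular mean following the recipe above."""
    samples = np.asarray(samples, dtype=float)
    ang = (samples - low) * 2 * np.pi / (high - low)  # map to [0, 2*pi]
    resultant = np.exp(1j * ang).mean()               # mean on unit circle
    mean_ang = np.angle(resultant) % (2 * np.pi)      # wrap into [0, 2*pi)
    return low + mean_ang * (high - low) / (2 * np.pi)

x = np.arange(20) / 20.0 * np.pi
m = circ_mean(x)   # ~1.49225651, matching the value quoted above

# Two angles straddling 0 average to ~0 (mod 2*pi), not to pi:
m2 = circ_mean(np.array([0.1, 2 * np.pi - 0.1]))
```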
If xc is in [0, 2*pi], then R > circular produces different results (NaN or a different number with > argument modulo=2pi for example) > >> From the R help: > """ > The function circular is used to create circular objects. as.circular > and is.circular coerce an object to a circular and test whether an > object is a circular data. > Usage > > circular(x, type = c("angles", "directions"), > units = c("radians", "degrees", "hours"), > template = c("none", "geographics", "clock12", "clock24"), modulo = > c("asis", "2pi", "pi"), > zero = 0, rotation = c("counter", "clock"), names) > """ > > Does anybody have an idea what these options mean (I don't), and what > the scipy.stats.circ* functions are actually doing? > > Thanks, > > Josef > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user --- Travis Oliphant Enthought, Inc. oliphant at enthought.com 1-512-536-1057 http://www.enthought.com From srean.list at gmail.com Sun Oct 16 12:21:27 2011 From: srean.list at gmail.com (srean) Date: Sun, 16 Oct 2011 11:21:27 -0500 Subject: [SciPy-User] Extracting best fit from sparse.linalg.lsqr Message-ID: Hi, I want to obtain the best fit that lsqr has found, but without multiplying A*x. It probably represents this vector somewhere internally but does not return it. I do not want to incur the avoidable cost of an extra mat x vec. If you are familiar with the code, could you point me to what exactly should I return, if I were to modify the code. I tried to figure it out from https://github.com/scipy/scipy/blob/a582c5a367dee4955ecc490b77232992f9d01eb7/scipy/sparse/linalg/isolve/lsqr.pybut too sleep starved to see the light. Thanks a lot -- srean -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From vanforeest at gmail.com  Sun Oct 16 18:18:12 2011
From: vanforeest at gmail.com (nicky van foreest)
Date: Mon, 17 Oct 2011 00:18:12 +0200
Subject: [SciPy-User] Averaging over unevenly-spaced records
In-Reply-To:
References:
Message-ID:

Hi,

Have you perhaps considered using itertools.groupby? Like this you can
group elements by datetime at second accuracy (use a key function that
strips all subsecond accuracy from your datetime objects). Then just
sum over your columns A and B, and divide for the average by the length
of the group.

HTH

Nicky

On 14 October 2011 19:24, Camilo Polymeris wrote:
> Hello all,
>
> I am pretty new to numpy (and numerical software packages in general),
> so this may be a basic question, but I would appreciate any help.
>
> Say I have a recarray like the following:
>
>    r = array([
>       ...
>       (datetime.datetime(2011, 3, 30, 16, 1, 15, 911000), 1.39, 18),
>       (datetime.datetime(2011, 3, 30, 16, 1, 16, 181000), 1.34, 22),
>       (datetime.datetime(2011, 3, 30, 16, 1, 16, 630000), 1.37, 19),
>       (datetime.datetime(2011, 3, 30, 16, 1, 16, 922000), 1.34, 19),
>       (datetime.datetime(2011, 3, 30, 16, 1, 17, 324000), 1.33, 19),
>       ...
>      dtype=[('datetime', '|O8'), ('A', '<f8'), ('B', '<i4')])
>
> I would like to, for every whole second, e.g. datetime(2011, 3, 30,
> 16, 1, 16), get the average of column A and the sum of column B, like
> this:
>
> r1 = array([
>       ...
>       [1.35, 60],  # for second datetime(2011, 3, 30, 16, 1, 16)
>       ...
>       ])
>
> As you can see, the datetimes are not homogeneously spaced. There can
> be any number of data points in one second (even zero -- then I would
> just keep the last value or 0 or NaN, whichever is easier). I have in
> the order of 10^8 to 10^9 records.
> I think it can be done with reduceat, but I would have to manually
> find the indices, which I don't think is the numpythonicest way to do
> this. Another option is to use griddata to interpolate the values at
> e.g.
1ms, to have evenly spaced data and then use evenly spaced indices -- > more elegant, but seems inefficient. Any suggestions? > > Thanks & best regards, > > Camilo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cpolymeris at gmail.com Sun Oct 16 18:24:54 2011 From: cpolymeris at gmail.com (Camilo Polymeris) Date: Sun, 16 Oct 2011 19:24:54 -0300 Subject: [SciPy-User] Averaging over unevenly-spaced records In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 7:18 PM, nicky van foreest wrote: > Hi, > > Have you perhaps considered using itertools.groupby? Like this you can > group elements by datetime at second accuracy (use a key function that > strips all subsecond accuracy from your datetime objects). Then just > sum over your columns A and B, and divide for the average by the length > of the group. That seems like a better approach. Thanks! Camilo From wesmckinn at gmail.com Sun Oct 16 19:23:19 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 16 Oct 2011 19:23:19 -0400 Subject: [SciPy-User] Averaging over unevenly-spaced records In-Reply-To: References: Message-ID: On Sun, Oct 16, 2011 at 6:24 PM, Camilo Polymeris wrote: > On Sun, Oct 16, 2011 at 7:18 PM, nicky van foreest wrote: >> Hi, >> >> Have you perhaps considered using itertools.groupby? Like this you can >> group elements by datetime at second accuracy (use a key function that >> strips all subsecond accuracy from your datetime objects). Then just >> sum over your columns A and B, and divide for the average by the length >> of the group. > > That seems like a better approach. Thanks! > > Camilo > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > You should take a look at my project, pandas (http://pandas.sourceforge.net/groupby.html).
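Nicky's itertools.groupby suggestion, sketched on a few of the records from the original question (plain tuples stand in for the recarray; note that groupby only merges *adjacent* equal keys, so the input must already be sorted by time, which it is here):

```python
import datetime
import itertools

# A few (timestamp, A, B) records from the question, as plain tuples.
records = [
    (datetime.datetime(2011, 3, 30, 16, 1, 16, 181000), 1.34, 22),
    (datetime.datetime(2011, 3, 30, 16, 1, 16, 630000), 1.37, 19),
    (datetime.datetime(2011, 3, 30, 16, 1, 16, 922000), 1.34, 19),
    (datetime.datetime(2011, 3, 30, 16, 1, 17, 324000), 1.33, 19),
]

def whole_second(rec):
    # Key function: strip the sub-second accuracy from the timestamp.
    return rec[0].replace(microsecond=0)

summary = {}
for second, group in itertools.groupby(records, key=whole_second):
    group = list(group)
    summary[second] = (sum(r[1] for r in group) / len(group),  # mean of A
                       sum(r[2] for r in group))               # sum of B

# summary[datetime(2011, 3, 30, 16, 1, 16)] is (~1.35, 60) for this data
```

For 10^8 to 10^9 records this pure-Python loop will be slow compared to a vectorized or pandas-based solution, but it shows the grouping logic in a dozen lines.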
It had a lot richer built-in functionality for this kind of stuff than anything else in the scientific Python ecosystem. Assuming your timestamps are unique, you need only do: def normalize(dt): return dt.replace(microsecond=0) data.groupby(normalize).agg({'A' : np.mean, 'B' : np.sum}) and that will give you exactly what you want here data is a pandas DataFrame object. To get your record array into the right format, do: data = DataFrame.from_records(r, index='datetime') that will turn the datetimes into the index (row labels) of the DataFrame. However-- if the datetimes are not unique, all is not lost. Don't set the DataFrame index and do instead: data = DataFrame(r) grouper = data['datetime'].map(normalize) data.groupby(grouper).agg({'A' : np.mean, 'B' : np.sum}) I think you'll find this a lot more palatable than a DIY approach using itertools. - Wes From nwagner at iam.uni-stuttgart.de Mon Oct 17 04:12:17 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 17 Oct 2011 10:12:17 +0200 Subject: [SciPy-User] callback functions for fmin_cobyla, fmin_l_bfgs_b, fmin_tnc Message-ID: Hi all, a callback function is missing in fmin_cobyla, fmin_l_bfgs_b, fmin_tnc. It would be a nice enhancement. Nils From JRadinger at gmx.at Mon Oct 17 05:59:06 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Mon, 17 Oct 2011 11:59:06 +0200 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: Message-ID: <20111017095906.326260@gmx.net> Hello Josef Hello others, I am not sure if that is what I really want. I just want to calculate a new predicted value using my regression equation and a variance (prediction intervall is statistically correct expression) for the new predicted value. I calculated my regression already in R and want to use the results in python manually (without a python-R interface). The R Coefficients are as follows: Estimate Std. 
Error t value Pr(>|t|) (Intercept) -9.00068 1.15351 -7.803 8.26e-12 *** Variable X1 1.87119 0.23341 8.017 2.95e-12 *** Variable X2 0.39193 0.07312 5.360 5.92e-07 *** Variable X3 0.27870 0.09561 2.915 0.00445 ** Can I use these results to manually calculate a predicted value of Y with a give set of new Xs? like X1 = 200 X2 = 150 X3 = 5 I can easily calculate the predicted Y as Y = -9 + 200*1.87 + 150*0.39 + 5*0.28 but how can I get the prediction interval? I am not sure if your approach is the one I need for that (with my given input) and if yes how to use it? Thank you in advance /Johannes > > Message: 2 > Date: Fri, 14 Oct 2011 18:54:23 -0400 > From: josef.pktd at gmail.com > Subject: Re: [SciPy-User] calculate predicted values from regression + > confidence intervall > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > On Fri, Oct 14, 2011 at 7:51 AM, Johannes Radinger > wrote: > > Hello, > > > > I did a statistical regression analysis (linear regression) in R which > > has following parameters: > > > > Y = X1 + X2 > > > > from the R-analysis I can get the intercept and slopes for the > independent variables so that I get: > > > > Y = Intercept + slope1*X1 + slope2*X2 > > > > Now I want to use that in Scipy to calculate new predicted Y values. > > Generally that is not a problem. I use the new X1 and X2 as input and > > the slopes and intercept are predefined. So I can easily calculate Y > new. > > > > Now my question arises: > > How can I calculate a kind of uncertainty (e.g. a confidence intervall) > for > > the new Y? What do I have to extract from R and how do I have to use > > the extracted parameters in scipy to calculate such things? > > Is that generally possible? 
> > Better sandbox than nothing: > > https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/regression/predstd.py#L28 > > This is my version for scikits.statsmodels.OLS (and maybe WLS) > > usage example > https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/examples/tut_ols.py > > Josef > > > > > Thank you very much > > Johannes > > -- > > NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zurück-Garantie! > > Jetzt informieren: http://www.gmx.net/de/go/freephone > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > ------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-User Digest, Vol 98, Issue 22 > ****************************************** -- NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zurück-Garantie! Jetzt informieren: http://www.gmx.net/de/go/freephone

From josef.pktd at gmail.com  Mon Oct 17 07:54:32 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 17 Oct 2011 07:54:32 -0400
Subject: [SciPy-User] calculate predicted values from regression + confidence intervall
In-Reply-To: <20111017095906.326260@gmx.net>
References: <20111017095906.326260@gmx.net>
Message-ID:

On Mon, Oct 17, 2011 at 5:59 AM, Johannes Radinger wrote:
> Hello Josef
> Hello others,
>
> I am not sure if that is what I really want.
> I just want to calculate a new predicted value
> using my regression equation and a variance
> (prediction intervall is statistically correct
> expression) for the new predicted value.
>
> I calculated my regression already in R and want to
> use the results in python manually (without a
> python-R interface).
>
> The R Coefficients are as follows:
> ? ? ? ? ? ? Estimate Std. Error t value Pr(>|t|) > (Intercept) ?-9.00068 ?
?1.15351 ?-7.803 8.26e-12 *** > Variable X1 ?1.87119 ? ?0.23341 ? 8.017 ?2.95e-12 *** > Variable X2 ?0.39193 ? ?0.07312 ? 5.360 ?5.92e-07 *** > Variable X3 ?0.27870 ? ?0.09561 ? 2.915 ?0.00445 ** > > Can I use these results to manually calculate > a predicted value of Y with a give set of new Xs? like > X1 = 200 > X2 = 150 > X3 = 5 > > I can easily calculate the predicted Y as > Y = -9 + 200*1.87 + 150*0.39 + 5*0.28 I don't think this is enough information to get the prediction confidence interval. You need the entire covariance matrix of the parameter estimates. Roughly (I would need to check the details): the parameter estimate is from a multivariate normal distribution, your y is a linear transformation, so the prediction should be normal distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + var_u_estimate (dot products for appropriate shapes) Without knowing the covariance matrix of the parameter estimates, you would have to assume that cov_beta is diagonal which is almost surely not the case. Josef > > but how can I get the prediction interval? > I am not sure if your approach is the one I need > for that (with my given input) and if yes > how to use it? > > Thank you in advance > > /Johannes > > > >> >> Message: 2 >> Date: Fri, 14 Oct 2011 18:54:23 -0400 >> From: josef.pktd at gmail.com >> Subject: Re: [SciPy-User] calculate predicted values from regression + >> ? ? ? confidence intervall >> To: SciPy Users List >> Message-ID: >> ? ? ? >> Content-Type: text/plain; charset=ISO-8859-1 >> >> On Fri, Oct 14, 2011 at 7:51 AM, Johannes Radinger >> wrote: >> > Hello, >> > >> > I did a statistical regression analysis (linear regression) in R which >> > has following parameters: >> > >> > Y = X1 + X2 >> > >> > from the R-analysis I can get the intercept and slopes for the >> independent variables so that I get: >> > >> > Y = Intercept + slope1*X1 + slope2*X2 >> > >> > Now I want to use that in Scipy to calculate new predicted Y values. 
>> > Generally that is not a problem. I use the new X1 and X2 as input and >> > the slopes and intercept are predefined. So I can easily calculate Y >> new. >> > >> > Now my question arises: >> > How can I calculate a kind of uncertainty (e.g. a confidence intervall) >> for >> > the new Y? What do I have to extract from R and how do I have to use >> > the extracted parameters in scipy to calculate such things? >> > Is that generally possible? >> >> Better sandbox then nothing: >> >> https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/sandbox/regression/predstd.py#L28 >> >> This is my version for scikits.statsmodels.OLS (and maybe WLS) >> >> usage example >> https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/examples/tut_ols.py >> >> Josef >> >> > >> > Thank you very much >> > Johannes >> > -- >> > NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! >> > Jetzt informieren: http://www.gmx.net/de/go/freephone >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> >> >> ------------------------------ >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> End of SciPy-User Digest, Vol 98, Issue 22 >> ****************************************** > > -- > NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! 
> Jetzt informieren: http://www.gmx.net/de/go/freephone
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From jradinger at gmx.at  Mon Oct 17 15:29:47 2011
From: jradinger at gmx.at (Johannes Radinger)
Date: Mon, 17 Oct 2011 21:29:47 +0200
Subject: [SciPy-User] calculate predicted values from regression + confidence intervall
In-Reply-To:
References:
Message-ID:

Hello again,

it is not a problem to get the covariance matrix of the parameter
estimates (with the vcov() function of R) and of course I can use
that also for the calculations...

The covariance matrix is:

             (Intercept)            L           uT  stream.order
(Intercept)   1.33059472 -0.246834193 -0.0307302435  0.0267554775
L            -0.24683419  0.054481286  0.0014007745 -0.0105982957
uT           -0.03073024  0.001400774  0.0053472652 -0.0007137384
stream.order  0.02675548 -0.010598296 -0.0007137384  0.0091419997

I probably just have to save that in the form of a python matrix (Just have
to look up how to do that).

But how can I now proceed?

I found the following description of prediction intervals (that's the thing
I want :) here: http://statmaster.sdu.dk/courses/st111/module05/module.pdf (page 11).
But can that be realised in python using the covariance matrix together with
the other estimates and std. errors of my regression parameters?

/Johannes

On 17.10.2011, at 19:00, scipy-user-request at scipy.org wrote:

>> The R Coefficients are as follows:
>>              Estimate Std. Error t value Pr(>|t|)
>> (Intercept)  -9.00068    1.15351  -7.803 8.26e-12 ***
>> Variable X1   1.87119    0.23341   8.017 2.95e-12 ***
>> Variable X2   0.39193    0.07312   5.360 5.92e-07 ***
>> Variable X3   0.27870    0.09561   2.915  0.00445 **
>>
>> Can I use these results to manually calculate
>> a predicted value of Y with a given set of new Xs?
like >> X1 = 200 >> X2 = 150 >> X3 = 5 >> >> I can easily calculate the predicted Y as >> Y = -9 + 200*1.87 + 150*0.39 + 5*0.28 > > I don't think this is enough information to get the prediction > confidence interval. You need the entire covariance matrix of the > parameter estimates. > > Roughly (I would need to check the details): > the parameter estimate is from a multivariate normal distribution, > your y is a linear transformation, so the prediction should be normal > distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + > var_u_estimate (dot products for appropriate shapes) > > Without knowing the covariance matrix of the parameter estimates, you > would have to assume that cov_beta is diagonal which is almost surely > not the case. > > Josef From josef.pktd at gmail.com Mon Oct 17 15:39:33 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 17 Oct 2011 15:39:33 -0400 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: Message-ID: On Mon, Oct 17, 2011 at 3:29 PM, Johannes Radinger wrote: > Hello again, > > it is not a problem to get the covariance matrix of the parameter > estimates (with the vcov() function of R) and of course I can use > that also for the calculations... > > The covariance matrix is: > ? ? ? ? ? ? (Intercept) ? ? ? ? ? ?L ? ? ? ? ? ?uT ?stream.order > (Intercept) ? 1.33059472 -0.246834193 -0.0307302435 ?0.0267554775 > L ? ? ? ? ? ?-0.24683419 ?0.054481286 ?0.0014007745 -0.0105982957 > uT ? ? ? ? ? -0.03073024 ?0.001400774 ?0.0053472652 -0.0007137384 > stream.order ?0.02675548 -0.010598296 -0.0007137384 ?0.0091419997 > > I probably just have to save that in the form of a python matrix (Just have > to look up how to do that). > > But how can I know proceed? > > I found following description of prediction intervals (Thats the thing > I want :) here: http://statmaster.sdu.dk/courses/st111/module05/module.pdf (page 11). 
> But can that be realised in python using the covariance matrix resp. the > other estimates and std. errors of my regression parameters? page 12: replace sigma^2 * (X'X)^{-1} by cov_beta, the first expression is cov_beta for OLS Josef > > /Johannes > > > Am 17.10.2011 um 19:00 schrieb scipy-user-request at scipy.org: > >>> The R Coefficients are as follows: >>> ? ? ? ? ? ? Estimate Std. Error t value Pr(>|t|) >>> (Intercept) ?-9.00068 ? ?1.15351 ?-7.803 8.26e-12 *** >>> Variable X1 ?1.87119 ? ?0.23341 ? 8.017 ?2.95e-12 *** >>> Variable X2 ?0.39193 ? ?0.07312 ? 5.360 ?5.92e-07 *** >>> Variable X3 ?0.27870 ? ?0.09561 ? 2.915 ?0.00445 ** >>> >>> Can I use these results to manually calculate >>> a predicted value of Y with a give set of new Xs? like >>> X1 = 200 >>> X2 = 150 >>> X3 = 5 >>> >>> I can easily calculate the predicted Y as >>> Y = -9 + 200*1.87 + 150*0.39 + 5*0.28 >> >> I don't think this is enough information to get the prediction >> confidence interval. You need the entire covariance matrix of the >> parameter estimates. >> >> Roughly (I would need to check the details): >> the parameter estimate is from a multivariate normal distribution, >> your y is a linear transformation, so the prediction should be normal >> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + >> var_u_estimate (dot products for appropriate shapes) >> >> Without knowing the covariance matrix of the parameter estimates, you >> would have to assume that cov_beta is diagonal which is almost surely >> not the case. 
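Putting the pieces of this thread together — the point prediction x0·beta, the quadratic form x0'·cov_beta·x0 plus the residual variance, then a symmetric interval — a hedged sketch using the covariance matrix and coefficients posted above. It assumes L, uT, stream.order correspond to X1, X2, X3; the residual variance var_u is a placeholder that would have to be taken from the R fit (sigma^2), since it was not posted:

```python
import numpy as np

# Covariance matrix of (Intercept, L, uT, stream.order) as posted above.
cov_beta = np.array([
    [ 1.33059472, -0.246834193, -0.0307302435,  0.0267554775],
    [-0.24683419,  0.054481286,  0.0014007745, -0.0105982957],
    [-0.03073024,  0.001400774,  0.0053472652, -0.0007137384],
    [ 0.02675548, -0.010598296, -0.0007137384,  0.0091419997],
])
beta = np.array([-9.00068, 1.87119, 0.39193, 0.27870])  # estimates from R

x0 = np.array([1.0, 200.0, 150.0, 5.0])  # new observation; leading 1 = intercept

var_u = 1.0  # PLACEHOLDER residual variance -- substitute sigma^2 from the fit

y_hat = x0.dot(beta)                          # point prediction
var_pred = x0.dot(cov_beta).dot(x0) + var_u   # prediction variance
half_width = 1.96 * np.sqrt(var_pred)         # normal approximation; with few
                                              # residual dof use a t quantile
interval = (y_hat - half_width, y_hat + half_width)
```

For a confidence interval on the *mean* response (rather than a new observation), drop the `+ var_u` term, as on page 12 of the pdf linked above.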
>> >> Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cpolymeris at gmail.com Mon Oct 17 17:55:03 2011 From: cpolymeris at gmail.com (Camilo Polymeris) Date: Mon, 17 Oct 2011 18:55:03 -0300 Subject: [SciPy-User] Averaging over unevenly-spaced records In-Reply-To: References: Message-ID: > You should take a look at my project, pandas > (http://pandas.sourceforge.net/groupby.html). It had a lot richer > built-in functionality for this kind of stuff than anything else in > the scientific Python ecosystem. > > Assuming your timestamps are unique, you need only do: > > def normalize(dt): > ? ?return dt.replace(microsecond=0) > data.groupby(normalize).agg({'A' : np.mean, 'B' : np.sum}) > > and that will give you exactly what you want Yes, the timestamps are unique. Looks neat. > here data is a pandas DataFrame object. To get your record array into > the right format, do: > > data = DataFrame.from_records(r, index='datetime') > > that will turn the datetimes into the index (row labels) of the DataFrame. > > However-- if the datetimes are not unique, all is not lost. Don't set > the DataFrame index and do instead: > > data = DataFrame(r) > grouper = data['datetime'].map(normalize) > data.groupby(grouper).agg({'A' : np.mean, 'B' : np.sum}) > > I think you'll find this a lot more palatable than a DIY approach > using itertools. > Thanks for your suggestions. I think, I'll give the DIY approach a try, for pedagogical reasons, but may later switch to pandas. Regards, Camilo From hturesson at gmail.com Tue Oct 18 07:12:59 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 18 Oct 2011 07:12:59 -0400 Subject: [SciPy-User] =?windows-1252?q?=91NPY=5FDOUBLE=92_undeclared?= Message-ID: Hi, When attempting to compile C extensions, I get the following type of errors: error: ?NPY_DOUBLE? undeclared(first use in this function) error: ?NPY_LONG? 
undeclared (first use in this function) error: ?NPY_UINT32? undeclared (first use in this function) It seems I've forgot to include some header defining NPY_*, but both "Python.h" and "Numeric/arrayobject.h" are included. And the compiler doesn't complain about not finding these headers. I get the same error when trying to compile the example from the SciPy Cookbook ( http://www.scipy.org/Cookbook/C_Extensions/NumPy_arrays#head-7b1bc91e01b2ea0714315c69f5c8c56b1280f0c0 ). Is there some additional header I need to include? Or did I screw up something else? Google is not very informative in about this matter (which suggests that I made a trivial error). I'm using gcc version 4.6.1 on Ubuntu, with Numpy 1.5.1 Thanks, Hjalmar -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Oct 18 07:16:58 2011 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Oct 2011 12:16:58 +0100 Subject: [SciPy-User] =?utf-8?b?4oCYTlBZX0RPVUJMReKAmSB1bmRlY2xhcmVk?= In-Reply-To: References: Message-ID: On Tue, Oct 18, 2011 at 12:12, Hjalmar Turesson wrote: > Hi, > When attempting to compile C extensions, I get the following type of errors: > > error: ?NPY_DOUBLE? undeclared(first use in this function) > error: ?NPY_LONG? undeclared (first use in this function) > error: ?NPY_UINT32? undeclared (first use in this function) > > It seems I've forgot to include some header defining NPY_*, but both > "Python.h" and "Numeric/arrayobject.h" are included. And the compiler > doesn't complain about not finding these headers. It's not "Numeric/arrayobject.h". It's "numpy/arrayobject.h" -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From hturesson at gmail.com Tue Oct 18 07:26:07 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 18 Oct 2011 07:26:07 -0400 Subject: [SciPy-User] =?windows-1252?q?=91NPY=5FDOUBLE=92_undeclared?= In-Reply-To: References: Message-ID: Thanks for the super-speedy reply. Worked (and horribly obvious in retrospect). Hjalmar On Tue, Oct 18, 2011 at 7:16 AM, Robert Kern wrote: > On Tue, Oct 18, 2011 at 12:12, Hjalmar Turesson > wrote: > > Hi, > > When attempting to compile C extensions, I get the following type of > errors: > > > > error: ?NPY_DOUBLE? undeclared(first use in this function) > > error: ?NPY_LONG? undeclared (first use in this function) > > error: ?NPY_UINT32? undeclared (first use in this function) > > > > It seems I've forgot to include some header defining NPY_*, but both > > "Python.h" and "Numeric/arrayobject.h" are included. And the compiler > > doesn't complain about not finding these headers. > > It's not "Numeric/arrayobject.h". It's "numpy/arrayobject.h" > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Mon Oct 17 13:56:46 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Mon, 17 Oct 2011 23:26:46 +0530 Subject: [SciPy-User] scipy.test gives error Message-ID: Hi, scipy.test gives following error : MKL FATAL ERROR : can not load libmkl_lapack.so import numpy;import scipy does not throw any error. This library is present in the LD_LIBARY_PATH. What can be the issue? These modules were installed few days back and used to work. 
-Aksharb -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Tue Oct 18 02:53:49 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Tue, 18 Oct 2011 12:23:49 +0530 Subject: [SciPy-User] Fwd: scipy.test gives error In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: akshar bhosale Date: Mon, Oct 17, 2011 at 11:26 PM Subject: scipy.test gives error To: scipy-user at scipy.org, scipy-dev-owner at scipy.org Hi, scipy.test gives following error : MKL FATAL ERROR : can not load libmkl_lapack.so import numpy;import scipy does not throw any error. This library is present in the LD_LIBARY_PATH. What can be the issue? These modules were installed few days back and used to work. -Aksharb -------------- next part -------------- An HTML attachment was scrubbed... URL: From alia_khouri at yahoo.com Tue Oct 18 10:24:19 2011 From: alia_khouri at yahoo.com (Alia) Date: Tue, 18 Oct 2011 07:24:19 -0700 (PDT) Subject: [SciPy-User] eigenvectors <-> nth root of row product Message-ID: <1318947859.67787.YahooMailNeo@web65715.mail.ac4.yahoo.com> Please excuse me if this question is out of topic or scope for this forum. I have just started using numpy and I would like to apply it to a practical problem that I have been researching. I have been reading about a method called AHP (Analytic Hierarchy Process) or the Saaty method [see: wikipedia article if you are curious http://en.wikipedia.org/wiki/Analytic_Hierarchy_Process] which is a " is a structured technique for organizing and analyzing complex decisions." and which involves, for the purposes of this group, calculating eigenvectors from matrices. 
After trawling the net for a guide to the calculation implied by this process, I found this pdf (http://www.booksites.net/download/coyle/student_files/AHP_Technique.pdf) which describes a method for arriving at eigenvectors, which I will quote somewhat:

THE AHP THEORY
Consider n elements to be compared, C1 ... Cn, and denote the relative "weight" (or priority or significance) of Ci with respect to Cj by aij, and form a square matrix A = (aij) of order n with the constraints that aij = 1/aji for i ≠ j, and aii = 1 for all i. Such a matrix is said to be a reciprocal matrix. The weights are consistent if they are transitive, that is aik = aij*ajk for all i, j, and k. Such a matrix might exist if the aij are calculated from exactly measured data. Then find a vector ω of order n such that Aω = λω. For such a matrix, ω is said to be an eigenvector (of order n) and λ is an eigenvalue. For a consistent matrix, λ = n.
...
THE AHP CALCULATIONS
There are several methods for calculating the eigenvector. Multiplying together the entries in each row of the matrix and then taking the nth root of that product gives a very good approximation to the correct answer. The nth roots are summed and that sum is used to normalise the eigenvector elements to add to 1.00. In the matrix below, the 4th root for the first row is 0.293 and that is divided by 5.024 to give 0.058 as the first element in the eigenvector. The table below gives a worked example in terms of four attributes to be compared which, for simplicity, we refer to as A, B, C, and D.

                                      nth root
                                    of product
           A     B     C     D       of values    Eigenvector
A          1    1/3   1/9   1/5        0.293         0.058
B          3     1     1     1         1.316         0.262
C          9     1     1     3         2.279         0.454
D          5     1    1/3    1         1.136         0.226
---------------------------------------------------------
Totals                                 5.024         1.000

I have represented the matrix in a python file where I try to compare the author's method of arriving at eigenvectors with numpy's eig function output:

import numpy as np
from numpy import array

m = array([
    [1., 1/3., 1/9., 1/5.],
    [3., 1., 1., 1.],
    [9., 1., 1., 3.],
    [5., 1., 1/3., 1.]
])

w, v = np.linalg.eig(m)
nth_root_prod = array([pow(row.prod(), 1./row.size) for row in m])
ev = nth_root_prod / sum(nth_root_prod)

Unfortunately, I am unable to reconcile numpy's output with the author's. I'm speculating now, but perhaps the numbers don't reconcile because the author, as he has hinted above, has taken the eigenvalue λ = n (which in this case is 4). If this is the case, then I would have to generate the eigenvectors for the matrix above where λ = 4. Presumably there is a way in numpy to do this. My first question, for those of you who are more familiar with these kinds of problems: am I completely on the wrong track here? Many thanks for any help on this.

Alia

From ckkart at hoc.net Wed Oct 19 03:17:33 2011
From: ckkart at hoc.net (Christian K.)
Date: Wed, 19 Oct 2011 07:17:33 +0000 (UTC)
Subject: [SciPy-User] calculate predicted values from regression + confidence intervall
References: <20111017095906.326260@gmx.net>
Message-ID: 

Hi Joseph,

 gmail.com> writes:
>
> Roughly (I would need to check the details):
> the parameter estimate is from a multivariate normal distribution,
> your y is a linear transformation, so the prediction should be normal
> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X +
> var_u_estimate (dot products for appropriate shapes)

Does this hold also for nonlinear models? I would need to calculate prediction intervals for something like

f(X,Y) = a1 - a2*log(X) + a3/Y   (inverse power/Arrhenius model from accelerated reliability testing)

Currently I am using a Monte Carlo approach to get the prediction intervals.
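A minimal sketch of such a Monte Carlo prediction interval for f(X,Y) = a1 - a2*log(X) + a3/Y: the fitted parameters, their covariance, and the residual scale below are made-up placeholders, not values from this thread. Each draw perturbs the parameters according to their estimated covariance, adds observation noise, and the empirical percentiles of the resulting predictions give the interval.

```python
import numpy as np

# Model from the thread: f(X, Y) = a1 - a2*log(X) + a3/Y.
def f(params, X, Y):
    a1, a2, a3 = params
    return a1 - a2 * np.log(X) + a3 / Y

# Hypothetical fit results (in practice these come from the estimator).
beta_est = np.array([2.0, 0.5, 1.5])     # fitted parameters (made up)
cov_beta = np.diag([0.04, 0.01, 0.09])   # parameter covariance (made up)
sigma_u = 0.3                            # residual std deviation (made up)

rng = np.random.default_rng(0)
X, Y = 10.0, 2.0                         # point at which to predict

# Draw parameter vectors from the estimated sampling distribution,
# add observation noise, and take empirical percentiles.
draws = rng.multivariate_normal(beta_est, cov_beta, size=10_000)
preds = f(draws.T, X, Y) + rng.normal(0.0, sigma_u, size=10_000)
lo, hi = np.percentile(preds, [2.5, 97.5])   # 95% prediction interval
```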
Best regards, Christian K. From josef.pktd at gmail.com Wed Oct 19 07:51:56 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 19 Oct 2011 07:51:56 -0400 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: <20111017095906.326260@gmx.net> Message-ID: On Wed, Oct 19, 2011 at 3:17 AM, Christian K. wrote: > Hi Joseph, > > gmail.com> writes: >> >> Roughly (I would need to check the details): >> the parameter estimate is from a multivariate normal distribution, >> your y is a linear transformation, so the prediction should be normal >> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + >> var_u_estimate (dot products for appropriate shapes) > > Does this hold also for nonlinear models? I would need to calculate prediction > intervals for something like > > f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated > reliability testing) your f(X,Y) is still linear in the parameters, a1, a2, a3. So the linear version still applies. For the general non-linear model where f(x, params) is non-linear in the parameters, it would hold only locally, as a linear approximation, with y = jac * beta, where the gradient/jacobian jac replaces x in the linear model in the covariance calculation. I haven't looked at it in detail yet, but I think some statistical packages might return this. The problem is that the local linear approximation might not be very good if you are interested in larger deviations, for example confidence intervals, and the standard deviation is large relative to the curvature. The same problem shows up for all non-linear transformation using this Delta method in statistics http://en.wikipedia.org/wiki/Delta_method There are ways to improve the approximation, but it's only simple in special cases. Most things that I have seen for the general nonlinear case in terms of higher order approximation looked too complicated for my taste. 
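To make the local linear approximation concrete, here is a minimal sketch of the delta-method prediction variance for a model that is non-linear in its parameters, with a finite-difference jacobian playing the role of X; the model f, the estimates, and the covariance below are hypothetical, made up for illustration.

```python
import numpy as np

# Hypothetical fitted model, non-linear in the parameter b.
def f(params, x):
    a, b = params
    return a * np.exp(-b * x)

beta_est = np.array([2.0, 0.3])        # fitted parameters (made up)
cov_beta = np.array([[0.010, 0.001],   # parameter covariance (made up)
                     [0.001, 0.004]])
sigma2_u = 0.05                        # residual variance (made up)

x = np.array([0.0, 1.0, 2.0])          # points at which to predict
eps = 1e-6

# Central-difference jacobian of f with respect to the parameters:
# this replaces X in the linear formula var(y) = J cov_beta J' + var_u.
jac = np.empty((x.size, beta_est.size))
for j in range(beta_est.size):
    step = np.zeros_like(beta_est)
    step[j] = eps
    jac[:, j] = (f(beta_est + step, x) - f(beta_est - step, x)) / (2 * eps)

y_pred = f(beta_est, x)
# diag(J cov_beta J') for each prediction point, plus observation noise.
y_pred_var = np.einsum('ij,jk,ik->i', jac, cov_beta, jac) + sigma2_u
y_pred_std = np.sqrt(y_pred_var)
```

For a model linear in the parameters the jacobian reduces to the design matrix itself, so this collapses to the X' * cov_beta * X formula quoted above.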
I guess, in statsmodels I will add the Delta method and bootstrap, and let the users decide if the approximation is good enough in their case, using the Delta method is much faster. Josef > > Currently I am using a Monte Carllo approach to get the prediction intervals. > > Best regards, Christian K. > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From akshar.bhosale at gmail.com Tue Oct 18 12:43:34 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Tue, 18 Oct 2011 22:13:34 +0530 Subject: [SciPy-User] Fwd: scipy.test gives error In-Reply-To: References: Message-ID: python -c 'import scipy;scipy.test(verbose=10)' gives me : NumPy version 1.6.0 NumPy is installed in /home/akshar/.local/lib/python2.6/site-packages/numpy SciPy version 0.9.0 SciPy is installed in /home/akshar/.local/lib/python2.6/site-packages/scipy Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.selector: INFO: /home/akshar/.local/lib/python2.6/site-packages/scipy/fftpack/convolve.so is executable; skipped nose.selector: INFO: /home/akshar/.local/lib/python2.6/site-packages/scipy/integrate/vode.so is executable; skipped . . . . . Tests maxRstat(Z, R, 1) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 1) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 2) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 2) on the Q data set using single linkage. ... 
ok Tests maxRstat(Z, R, 2) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 2) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 2) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3) on the Q data set using centroid linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using complete linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using median linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using single linkage. ... ok Tests maxRstat(Z, R, 3) on the Q data set using Ward linkage. ... ok Tests maxRstat(Z, R, 3) on linkage and inconsistency matrices with different numbers of clusters. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on empty linkage. Expecting exception. ... ok Tests maxRstat(Z, R, 3) on linkage with one cluster. ... ok Tests maxRstat(Z, R, 3.3). Expecting exception. ... ok Tests maxRstat(Z, R, -1). Expecting exception. ... ok Tests maxRstat(Z, R, 4). Expecting exception. ... ok Tests num_obs_linkage(Z) on linkage over 2 observations. ... ok Tests num_obs_linkage(Z) on linkage over 3 observations. ... ok Tests num_obs_linkage(Z) on linkage on observation sets between sizes 4 and 15 (step size 3). ... ok Tests num_obs_linkage(Z) with empty linkage. ... ok Tests to_mlab_linkage on linkage array with multiple rows. ... ok Tests to_mlab_linkage on empty linkage array. ... ok Tests to_mlab_linkage on linkage array with single row. ... ok test_hierarchy.load_testing_files ... ok Ticket #505. ... ok Testing that kmeans2 init methods work. ... 
MKL FATAL ERROR: Cannot load libmkl_lapack.so

---------- Forwarded message ----------
From: akshar bhosale
Date: Tue, Oct 18, 2011 at 7:51 PM
Subject: Fwd: scipy.test gives error
To: scipy-dev at scipy.org

On Tue, Oct 18, 2011 at 12:23 PM, akshar bhosale wrote:
>
> ---------- Forwarded message ----------
> From: akshar bhosale
> Date: Mon, Oct 17, 2011 at 11:26 PM
> Subject: scipy.test gives error
> To: scipy-user at scipy.org, scipy-dev-owner at scipy.org
>
> Hi,
> scipy.test gives the following error:
> MKL FATAL ERROR: cannot load libmkl_lapack.so
> import numpy; import scipy does not throw any error.
> This library is present in the LD_LIBRARY_PATH.
> What can be the issue?
> These modules were installed a few days back and used to work.
>
> -Aksharb

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alia_khouri at yahoo.com Wed Oct 19 09:55:21 2011
From: alia_khouri at yahoo.com (Alia)
Date: Wed, 19 Oct 2011 06:55:21 -0700 (PDT)
Subject: [SciPy-User] eigenvectors <-> nth root of row product
Message-ID: <1319032521.7521.YahooMailNeo@web65705.mail.ac4.yahoo.com>

Just to do a self-followup on my earlier post, I found this page (http://people.revoledu.com/kardi/tutorial/AHP/Priority%20Vector.htm) to be very helpful in answering my question.

Best,

AK

From ckkart at hoc.net Thu Oct 20 05:11:08 2011
From: ckkart at hoc.net (Christian K.)
Date: Thu, 20 Oct 2011 09:11:08 +0000 (UTC)
Subject: [SciPy-User] calculate predicted values from regression + confidence intervall
References: <20111017095906.326260@gmx.net>
Message-ID: 

 gmail.com> writes:
> > f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated
> > reliability testing)
>
> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the
> linear version still applies.
Ok, but then I do not understand how to follow your indications for the prediction interval:

>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X +
>> var_u_estimate (dot products for appropriate shapes)

X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3 parameters. Sorry for my ignorance on statistics; I really appreciate your help.

Regards, Christian

From ejefree at yandex.ru Thu Oct 20 04:15:47 2011
From: ejefree at yandex.ru (ejefree)
Date: Thu, 20 Oct 2011 14:15:47 +0600
Subject: [SciPy-User] callback functions for fmin_cobyla, fmin_l_bfgs_b, fmin_tnc
In-Reply-To: 
References: 
Message-ID: 

I need it too! A callback function is required to use scipy.optimize in other libraries.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Thu Oct 20 10:12:37 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 20 Oct 2011 10:12:37 -0400
Subject: [SciPy-User] calculate predicted values from regression + confidence intervall
In-Reply-To: 
References: <20111017095906.326260@gmx.net>
Message-ID: 

On Thu, Oct 20, 2011 at 5:11 AM, Christian K. wrote:
>  gmail.com> writes:
>> > f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated
>> > reliability testing)
>>
>> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the
>> linear version still applies.
>
> Ok, but then I do not understand how to follow your indications for the
> prediction interval:
>
>>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X +
>>> var_u_estimate (dot products for appropriate shapes)
>
> X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3
> parameters.
> Sorry for my ignorance on statistics; I really appreciate your help.

I'm attaching a complete example for the linear in parameters case, including the comparison with statsmodels.
the relevant part is y_pred = np.dot(xp, beta_est) y_pred_cov = np.dot(xp, np.dot(cov_params, xp.T)) y_pred_cov = np.atleast_1d(y_pred_cov) y_pred_std = np.sqrt(np.diag(y_pred_cov) + sigma2_u) It took me a bit of time to match up the DIY version with statsmodels, because of shape, ddof, sqrt bugs in the initially quickly written example. I hope this helps. Josef > > Regards, Christian > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: try_predict_interval.py Type: text/x-python Size: 1828 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Thu Oct 20 10:32:45 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Oct 2011 16:32:45 +0200 Subject: [SciPy-User] callback functions for fmin_cobyla, fmin_l_bfgs_b, fmin_tnc In-Reply-To: References: Message-ID: On Thu, 20 Oct 2011 14:15:47 +0600 ejefree wrote: > I need it to! > The callback function is required to use scipy.optimize >in other libraries FWIW, Openopt provides callback functions. Nils From ckkart at hoc.net Thu Oct 20 16:50:39 2011 From: ckkart at hoc.net (Christian K.) Date: Thu, 20 Oct 2011 22:50:39 +0200 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: <20111017095906.326260@gmx.net> Message-ID: Am 20.10.11 16:12, schrieb josef.pktd at gmail.com: > On Thu, Oct 20, 2011 at 5:11 AM, Christian K. wrote: >> gmail.com> writes: >>>> f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated >>>> reliability testing) >>> >>> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the >>> linear version still applies. 
>> >> Ok, but then I do not understand how to follow your indications for the >> prediction interval: >> >>>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + >>>> var_u_estimate (dot products for appropriate shapes) >> >> X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3 >> paramters. >> Sorry for my ignorance on statistics, I really apppreaciate your help. > > I'm attaching a complete example for the linear in parameters case, > including the comparison with statsmodels. Ok, I got it, thank you very much. As I understood, this works for OLS (only?). What about if I get the covariance matrix from a 2D odr/leastsq fit from scipy.odr ? I noticed, that the covariance matrices differ by a constant (large) factor. Regards, Christian From robert.kern at gmail.com Thu Oct 20 17:15:36 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Oct 2011 22:15:36 +0100 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: <20111017095906.326260@gmx.net> Message-ID: On Thu, Oct 20, 2011 at 21:50, Christian K. wrote: > Am 20.10.11 16:12, schrieb josef.pktd at gmail.com: >> On Thu, Oct 20, 2011 at 5:11 AM, Christian K. wrote: >>> ? gmail.com> writes: >>>>> f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated >>>>> reliability testing) >>>> >>>> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the >>>> linear version still applies. >>> >>> Ok, but then I do not understand how to follow your indications for the >>> prediction interval: >>> >>>>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + >>>>> var_u_estimate (dot products for appropriate shapes) >>> >>> X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3 >>> paramters. >>> Sorry for my ignorance on statistics, I really apppreaciate your help. 
>> >> I'm attaching a complete example for the linear in parameters case, >> including the comparison with statsmodels. > > Ok, I got it, thank you very much. As I understood, this works for OLS > (only?). What about if I get the covariance matrix from a 2D odr/leastsq > fit from scipy.odr ? I noticed, that the covariance matrices differ by a > constant (large) factor. ODRPACK will scale the covariance matrix by the Chi^2 score of the residuals (i.e. divide the residuals by the error bars, square, sum, divide by nobs-nparams), IIRC. This accounts for misestimation of the error bars. If the error bars were correctly estimated, the Chi^2 score will be ~1. If the error bars were too small compared to the residuals, then the Chi^2 score will be high, and thus increase the estimated variance, etc. This may or may not be what you want, especially when comparing it with other tools, but it's what ODRPACK computes, so it's what scipy.odr returns. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From josef.pktd at gmail.com Thu Oct 20 20:03:26 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 20 Oct 2011 20:03:26 -0400 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: <20111017095906.326260@gmx.net> Message-ID: On Thu, Oct 20, 2011 at 5:15 PM, Robert Kern wrote: > On Thu, Oct 20, 2011 at 21:50, Christian K. wrote: >> Am 20.10.11 16:12, schrieb josef.pktd at gmail.com: >>> On Thu, Oct 20, 2011 at 5:11 AM, Christian K. wrote: >>>> ? gmail.com> writes: >>>>>> f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated >>>>>> reliability testing) >>>>> >>>>> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the >>>>> linear version still applies. 
>>>> >>>> Ok, but then I do not understand how to follow your indications for the >>>> prediction interval: >>>> >>>>>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X + >>>>>> var_u_estimate (dot products for appropriate shapes) >>>> >>>> X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3 >>>> paramters. >>>> Sorry for my ignorance on statistics, I really apppreaciate your help. >>> >>> I'm attaching a complete example for the linear in parameters case, >>> including the comparison with statsmodels. >> >> Ok, I got it, thank you very much. As I understood, this works for OLS >> (only?). It's OLS only, it can be adapted to other estimators like non-linear least squares, or to weighted least squares. I never looked at the details of odr, so I'm no help there. Josef >>What about if I get the covariance matrix from a 2D odr/leastsq >> fit from scipy.odr ? I noticed, that the covariance matrices differ by a >> constant (large) factor. > > ODRPACK will scale the covariance matrix by the Chi^2 score of the > residuals (i.e. divide the residuals by the error bars, square, sum, > divide by nobs-nparams), IIRC. This accounts for misestimation of the > error bars. If the error bars were correctly estimated, the Chi^2 > score will be ~1. If the error bars were too small compared to the > residuals, then the Chi^2 score will be high, and thus increase the > estimated variance, etc. This may or may not be what you want, > especially when comparing it with other tools, but it's what ODRPACK > computes, so it's what scipy.odr returns. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? 
  -- Umberto Eco

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From gkclri at yahoo.com Fri Oct 21 10:29:20 2011
From: gkclri at yahoo.com (Gopalakrishnan Ravimohan)
Date: Fri, 21 Oct 2011 07:29:20 -0700 (PDT)
Subject: [SciPy-User] Fw: problem in installing th scipy
In-Reply-To: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com>
References: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com>
Message-ID: <1319207360.31134.YahooMailNeo@web44813.mail.sp1.yahoo.com>

With this mail I have attached the error file from my attempt to install scipy on my cluster. Please help me sort out the error, as I am unable to find what is causing the program to exit before building.

With Regards
GOPALAKRISHNAN.R
Senior Research Fellow
C/o. Dr. V. Subramanian
Chemical Laboratory
Central Leather Research Institute (CSIR)
India

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: scipy_installation_problem.rtf
Type: application/msword
Size: 8177 bytes
Desc: not available
URL: 

From robert.kern at gmail.com Fri Oct 21 10:33:34 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 21 Oct 2011 15:33:34 +0100
Subject: [SciPy-User] [SciPy-Dev] Fw: problem in installing th scipy
In-Reply-To: <1319207360.31134.YahooMailNeo@web44813.mail.sp1.yahoo.com>
References: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com> <1319207360.31134.YahooMailNeo@web44813.mail.sp1.yahoo.com>
Message-ID: 

On Fri, Oct 21, 2011 at 15:29, Gopalakrishnan Ravimohan wrote:
>
> With this mail I have attached the error file from my attempt to install
> scipy on my cluster. Please help me sort out the error, as I am unable to
> find what is causing the program to exit before building.
libptf77blas.a (and the rest of ATLAS) was not compiled correctly for it to be linked into an extension module. As the error message says, you will need to rebuild ATLAS using the -fPIC flag. Consult the ATLAS build instructions for the details on how to do that.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From akshar.bhosale at gmail.com Thu Oct 20 11:51:01 2011
From: akshar.bhosale at gmail.com (akshar bhosale)
Date: Thu, 20 Oct 2011 21:21:01 +0530
Subject: [SciPy-User] error in numpy-1.6.0 install
Message-ID: 

Hi,

I have an Intel Xeon 64-bit machine running RHEL 5.2 x86_64. I have the Intel cluster toolkit (11/069) and MKL 10.3 installed.

What are the best options for installing numpy-1.6.0, i.e. what changes are required in site.cfg, intelcompiler.py, etc.? What flags/options should be given while configuring/building/installing it? Are any patches required? What are the detailed steps for the same?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From akshar.bhosale at gmail.com Thu Oct 20 13:32:42 2011
From: akshar.bhosale at gmail.com (akshar bhosale)
Date: Thu, 20 Oct 2011 23:02:42 +0530
Subject: [SciPy-User] numpy.test hangs
Message-ID: 

Hi,

I have an Intel Xeon 64-bit machine running RHEL 5.2 x86_64. I have the Intel cluster toolkit (11/069) and MKL 10.3 installed. I have installed numpy-1.6.0 and am testing it with nose. numpy.test just hangs as shown below and nothing further appears. What could be the issue?

######################
python
Python 2.6 (r26:66714, Sep 22 2011, 12:28:47)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy >>> numpy.test(); Running unit tests for numpy NumPy version 1.6.0 NumPy is installed in /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy Python version 2.6 (r26:66714, Sep 22 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 ..................................................................................................................... ########################### -akshar -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkclri at yahoo.com Fri Oct 21 05:05:24 2011 From: gkclri at yahoo.com (Gopalakrishnan Ravimohan) Date: Fri, 21 Oct 2011 02:05:24 -0700 (PDT) Subject: [SciPy-User] problem in installing th scipy Message-ID: <1319187924.36334.YahooMailNeo@web44813.mail.sp1.yahoo.com> with this mail i have attached the error file. which i tried to install in my cluster. plz help me in sorting the error as i am unable to find what is causing to exit the program before building. With Regards GOPALAKRISHNAN.R Senior Research Fellow C/o. Dr. V. Subramanian Chemical Laboratory Central Leather Research Institute (CSIR) India -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: scipy_installation_problem.rtf Type: application/msword Size: 8177 bytes Desc: not available URL: From akshar.bhosale at gmail.com Sat Oct 22 05:24:39 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sat, 22 Oct 2011 14:54:39 +0530 Subject: [SciPy-User] Fwd: [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: akshar bhosale Date: Sat, Oct 22, 2011 at 1:46 PM Subject: Re: [SciPy-Dev] numpy.test hangs To: SciPy Developers List , Discussion of Numerical Python Hi, python Python 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.show_config() lapack_opt_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_opt_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] lapack_mkl_info: libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] blas_mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = 
['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] mkl_info: libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', '/opt/intel/Compiler/11.0/069/include/'] Akshar On Sat, Oct 22, 2011 at 11:54 AM, akshar bhosale wrote: > yes sure.. > i have intel cluster toolkit installed on my system. (11/069 version and > mkl 10.3). i have machine having intel xeon processor and rhel 5.2 x86_64 > platform. i am trying with intel compilers. > if i do > > python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], > numpy.complex128).T.I.H' > python: symbol lookup error: > /opt/intel/Compiler/11.0/069/mkl/lib/em64/libmkl_lapack.so: undefined > symbol: mkl_lapack_zgeqrf > > my site.cfg is : > #################### > [mkl] > > mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc > lapack_libs = mkl_lapack95_lp64 > > library_dirs = > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ > include_dirs = > /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ > #################### > and intelcompiler.py is : > ############################ > from distutils.unixccompiler import UnixCCompiler > from numpy.distutils.exec_command import find_executable > import sys > > class IntelCCompiler(UnixCCompiler): > """ A modified Intel compiler compatible with an gcc built Python.""" > compiler_type = 'intel' > cc_exe = 'icc' > cc_args = 'fPIC' > > def __init__ (self, verbose=0, dry_run=0, force=0): > sys.exit(0) > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -fPIC ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > > class 
IntelItaniumCCompiler(IntelCCompiler): > compiler_type = 'intele' > > # On Itanium, the Intel Compiler used to be called ecc, let's search > for > # it (now it's also icc, so ecc is last in the search). > for cc_exe in map(find_executable,['icc','ecc']): > if cc_exe: > break > > class IntelEM64TCCompiler(UnixCCompiler): > """ A modified Intel x86_64 compiler compatible with a 64bit gcc built > Python. > """ > compiler_type = 'intelem' > cc_exe = 'icc -m64 -fPIC' > cc_args = "-fPIC -openmp" > def __init__ (self, verbose=0, dry_run=0, force=0): > UnixCCompiler.__init__ (self, verbose,dry_run, force) > self.cc_exe = 'icc -m64 -fPIC -openmp ' > compiler = self.cc_exe > self.set_executables(compiler=compiler, > compiler_so=compiler, > compiler_cxx=compiler, > linker_exe=compiler, > linker_so=compiler + ' -shared -lstdc++') > ########################## > LD_LIBRARY_PATH is : > ######################### > > /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib > ######################### > > -AKSHAR > > > > On Sat, Oct 22, 2011 at 11:32 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale < >> akshar.bhosale at gmail.com> wrote: >> >>> Hi, >>> >>> unfortunately 1.6.1 also hangs on the same place. Can i move ahead with >>> installing scipy? >>> >>> >> Hmm. Well, give scipy a try, but it would be nice to know what the problem >> is with einsum. 
I'm thinking compiler, GCC 4.1.2 might be a bit old, but >> it could easily be something else. Can you give us more information about >> your system? >> >> Chuck >> >>> >>> On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale < >>>> akshar.bhosale at gmail.com> wrote: >>>> >>>>> Hi, >>>>> does this mean that numpy is not configured properly or i can ignore >>>>> this and go ahead with scipy installation? >>>>> >>>> >>>> Scipy will probably work, but you should really install numpy 1.6.1 >>>> instead of 1.6.0. >>>> >>>> >>>> >>>> Chuck >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at fastolfe.net Sat Oct 22 13:22:43 2011 From: david at fastolfe.net (David Nesting) Date: Sat, 22 Oct 2011 10:22:43 -0700 Subject: [SciPy-User] scikits.timeseries for many, large, independent and irregular time series Message-ID: I am interested in using scikits.timeseries for a project. I expect to use it with hundreds or thousands of sensors recording points of data. I need to perform computations on the resulting timeseries as data points arrive. Most of these computations will be on recent data, but some will need to look backward much farther. 
The reporting rate for each incoming data point will vary from a second (perhaps less) to a minute or longer, and the time at which the points arrive will not be perfectly regular (some sensors will only report changes). 1. I've seen posts discussing converting irregular timeseries to "proper" regularly spaced TimeSeries data. But since this loses data fidelity, and not all values are meant to be interpolated, I want to keep the original data points, so it seems like I would need to store (and persist) this data using data structures and methods of my own devising, and only use TimeSeries objects when I want to do computations on them (and at that point I can make them regular and fill in or interpolate empty values as needed). Does that sound right? 2. Some computations could involve very large TimeSeries objects. The original data points may not even be in memory and would need to be fetched from a database. It seems like I cannot avoid putting these data points in memory in order to perform the computations, though. Is there any support (or suggested techniques) for doing streaming operations on TimeSeries? In other words, if I want to get an average (across time) of the sum (across space) of 10 000 different TimeSeries, I should be able to compute that incrementally without having to hold any of those TimeSeries in memory. And, ideally, if some of those data points need to be fetched from a database (or a back-end server), it should be possible to do that fairly transparently. Any advice here? I appreciate your help. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From akshar.bhosale at gmail.com Sun Oct 23 01:12:27 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sun, 23 Oct 2011 10:42:27 +0530 Subject: [SciPy-User] lapack import error with NumPy Message-ID: Hi, i have installed numpy 1.6.0 with python 2.6. i have intel cluster toolkit installed on my system. (11/069 version and mkl 10.3). 
i have machine having intel xeon processor and rhel 5.2 x86_64 platform. i am trying with intel compilers. if i do python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], numpy.complex128).T.I.H' python: symbol lookup error: /opt/intel/Compiler/11.0/069/ mkl/lib/em64/libmkl_lapack.so: undefined symbol: mkl_lapack_zgeqrf my site.cfg is : #################### [mkl] mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc lapack_libs = mkl_lapack95_lp64 library_dirs = /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ include_dirs = /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ #################### and intelcompiler.py is : ############################ from distutils.unixccompiler import UnixCCompiler from numpy.distutils.exec_command import find_executable import sys class IntelCCompiler(UnixCCompiler): """ A modified Intel compiler compatible with an gcc built Python.""" compiler_type = 'intel' cc_exe = 'icc' cc_args = 'fPIC' def __init__ (self, verbose=0, dry_run=0, force=0): sys.exit(0) UnixCCompiler.__init__ (self, verbose,dry_run, force) self.cc_exe = 'icc -fPIC ' compiler = self.cc_exe self.set_executables(compiler=compiler, compiler_so=compiler, compiler_cxx=compiler, linker_exe=compiler, linker_so=compiler + ' -shared -lstdc++') class IntelItaniumCCompiler(IntelCCompiler): compiler_type = 'intele' # On Itanium, the Intel Compiler used to be called ecc, let's search for # it (now it's also icc, so ecc is last in the search). for cc_exe in map(find_executable,['icc','ecc']): if cc_exe: break class IntelEM64TCCompiler(UnixCCompiler): """ A modified Intel x86_64 compiler compatible with a 64bit gcc built Python. 
""" compiler_type = 'intelem' cc_exe = 'icc -m64 -fPIC' cc_args = "-fPIC -openmp" def __init__ (self, verbose=0, dry_run=0, force=0): UnixCCompiler.__init__ (self, verbose,dry_run, force) self.cc_exe = 'icc -m64 -fPIC -openmp ' compiler = self.cc_exe self.set_executables(compiler=compiler, compiler_so=compiler, compiler_cxx=compiler, linker_exe=compiler, linker_so=compiler + ' -shared -lstdc++') ########################## LD_LIBRARY_PATH is : ######################### /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib ######################### -AKSHAR -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akshar.bhosale at gmail.com Sun Oct 23 05:45:07 2011 From: akshar.bhosale at gmail.com (akshar bhosale) Date: Sun, 23 Oct 2011 15:15:07 +0530 Subject: [SciPy-User] [SciPy-Dev] numpy.test hangs In-Reply-To: References: Message-ID: Hi, i changed site.cfg : ####site.cfg#### [MKL} mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc lapack_libs = mkl_lapack95_lp64, mkl_lapack, mkl_scalapack_ilp64, mkl_scalapack_lp64 #lapack_libs =mkl_lapack,mkl_scalapack_ilp64,mkl_scalapack_lp64,mkl_lapack95_lp64 library_dirs = /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ include_dirs = /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ and now numpy.test hangs at : python Python 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(verbose=3) Running unit tests for numpy NumPy version 1.6.0 NumPy is installed in /home/aksharb/Python-2.6/lib/python2.6/site-packages/numpy Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] nose version 1.0.0 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/core/scalarmath.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/core/umath.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray_tests.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/core/umath_tests.so is executable; 
skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/fft/fftpack_lite.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so is executable; skipped nose.selector: INFO: /home/external/unipune/gadre/Python-2.6/lib/python2.6/site-packages/numpy/random/mtrand.so is executable; skipped test_api.test_fastCopyAndTranspose ... ok test_arrayprint.TestArrayRepr.test_nan_inf ... ok test_str (test_arrayprint.TestComplexArray) ... ok Ticket 844. ... ok test_blasdot.test_blasdot_used ... ok test_blasdot.test_dot_2args ... ok test_blasdot.test_dot_3args ... ok test_blasdot.test_dot_3args_errors ... ok . . . Test if an appropriate exception is raised when passing bad values to ... ok Test whether equivalent subarray dtypes hash the same. ... ok Test whether different subarray dtypes hash differently. ... ok Test some data types that are equal ... ok Test some more complicated cases that shouldn't be equal ... ok Test some simple cases that shouldn't be equal ... ok test_single_subarray (test_dtype.TestSubarray) ... ok test_einsum_errors (test_einsum.TestEinSum) ... ok test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ... ####################### On Sat, Oct 22, 2011 at 1:46 PM, wrote: > This is a members-only list. Your message has been automatically > rejected, since it came from a non-member's email address. Please > make sure to use the email account that you used to join this list. > > > > ---------- Forwarded message ---------- > From: akshar bhosale > To: SciPy Developers List , Discussion of Numerical > Python > Date: Sat, 22 Oct 2011 13:46:49 +0530 > Subject: Re: [SciPy-Dev] numpy.test hangs > Hi, > > python > Python 2.6 (r26:66714, May 29 2011, 15:10:47) > [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> import numpy > >>> numpy.show_config() > lapack_opt_info: > libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', > 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] > library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', > '/opt/intel/Compiler/11.0/069/include/'] > blas_opt_info: > libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'mkl_mc', 'pthread'] > library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', > '/opt/intel/Compiler/11.0/069/include/'] > lapack_mkl_info: > libraries = ['mkl_lapack95_lp64', 'mkl_def', 'mkl_intel_lp64', > 'mkl_intel_thread', 'mkl_core', 'mkl_mc', 'pthread'] > library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', > '/opt/intel/Compiler/11.0/069/include/'] > blas_mkl_info: > libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'mkl_mc', 'pthread'] > library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', > '/opt/intel/Compiler/11.0/069/include/'] > mkl_info: > libraries = ['mkl_def', 'mkl_intel_lp64', 'mkl_intel_thread', > 'mkl_core', 'mkl_mc', 'pthread'] > library_dirs = ['/opt/intel/Compiler/11.0/069/mkl/lib/em64t'] > define_macros = [('SCIPY_MKL_H', None)] > include_dirs = ['/opt/intel/Compiler/11.0/069/mkl/include', > '/opt/intel/Compiler/11.0/069/include/'] > > Akshar > > > On Sat, Oct 22, 2011 at 11:54 AM, akshar bhosale > wrote: > >> yes sure.. >> i have intel cluster toolkit installed on my system. (11/069 version and >> mkl 10.3). i have machine having intel xeon processor and rhel 5.2 x86_64 >> platform. 
i am trying with intel compilers. >> if i do >> >> python -c 'import numpy;numpy.matrix([[1, 5, 10], [1.0, 3j, 4]], >> numpy.complex128).T.I.H' >> python: symbol lookup error: >> /opt/intel/Compiler/11.0/069/mkl/lib/em64/libmkl_lapack.so: undefined >> symbol: mkl_lapack_zgeqrf >> >> my site.cfg is : >> #################### >> [mkl] >> >> mkl_libs = mkl_def, mkl_intel_lp64, mkl_intel_thread, mkl_core, mkl_mc >> lapack_libs = mkl_lapack95_lp64 >> >> library_dirs = >> /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/lib/intel64/ >> include_dirs = >> /opt/intel/Compiler/11.0/069/mkl/include:/opt/intel/Compiler/11.0/069/include/ >> #################### >> and intelcompiler.py is : >> ############################ >> from distutils.unixccompiler import UnixCCompiler >> from numpy.distutils.exec_command import find_executable >> import sys >> >> class IntelCCompiler(UnixCCompiler): >> """ A modified Intel compiler compatible with an gcc built Python.""" >> compiler_type = 'intel' >> cc_exe = 'icc' >> cc_args = 'fPIC' >> >> def __init__ (self, verbose=0, dry_run=0, force=0): >> sys.exit(0) >> UnixCCompiler.__init__ (self, verbose,dry_run, force) >> self.cc_exe = 'icc -fPIC ' >> compiler = self.cc_exe >> self.set_executables(compiler=compiler, >> compiler_so=compiler, >> compiler_cxx=compiler, >> linker_exe=compiler, >> linker_so=compiler + ' -shared -lstdc++') >> >> class IntelItaniumCCompiler(IntelCCompiler): >> compiler_type = 'intele' >> >> # On Itanium, the Intel Compiler used to be called ecc, let's search >> for >> # it (now it's also icc, so ecc is last in the search). >> for cc_exe in map(find_executable,['icc','ecc']): >> if cc_exe: >> break >> >> class IntelEM64TCCompiler(UnixCCompiler): >> """ A modified Intel x86_64 compiler compatible with a 64bit gcc built >> Python. 
>> """ >> compiler_type = 'intelem' >> cc_exe = 'icc -m64 -fPIC' >> cc_args = "-fPIC -openmp" >> def __init__ (self, verbose=0, dry_run=0, force=0): >> UnixCCompiler.__init__ (self, verbose,dry_run, force) >> self.cc_exe = 'icc -m64 -fPIC -openmp ' >> compiler = self.cc_exe >> self.set_executables(compiler=compiler, >> compiler_so=compiler, >> compiler_cxx=compiler, >> linker_exe=compiler, >> linker_so=compiler + ' -shared -lstdc++') >> ########################## >> LD_LIBRARY_PATH is : >> ######################### >> >> /opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/scalasca-1.3.3/lib:/opt/PBS/lib:/opt/intel/mpi/lib64:/opt/maui/lib:/opt/jdk1.6.0_23/lib:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/opt/intel/Compiler/11.0/069/ipp/em64t/sharedlib:/opt/intel/Compiler/11.0/069/mkl/lib/em64t:/opt/intel/Compiler/11.0/069/tbb/em64t/cc4.1.0_libc2.4_kernel2.6.16.21/lib:/opt/intel/Compiler/11.0/069/lib/intel64:/usr/local/lib >> ######################### >> >> -AKSHAR >> >> >> >> On Sat, Oct 22, 2011 at 11:32 AM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Fri, Oct 21, 2011 at 11:49 PM, akshar bhosale < >>> akshar.bhosale at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> unfortunately 1.6.1 also hangs on the same place. Can i move ahead with >>>> installing scipy? >>>> >>>> >>> Hmm. Well, give scipy a try, but it would be nice to know what the >>> problem is with einsum. I'm thinking compiler, GCC 4.1.2 might be a bit >>> old, but it could easily be something else. Can you give us more information >>> about your system? 
>>> >>> Chuck >>> >>>> >>>> On Sat, Oct 22, 2011 at 12:19 AM, Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Oct 21, 2011 at 5:25 AM, akshar bhosale < >>>>> akshar.bhosale at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> does this mean that numpy is not configured properly or i can ignore >>>>>> this and go ahead with scipy installation? >>>>>> >>>>> >>>>> Scipy will probably work, but you should really install numpy 1.6.1 >>>>> instead of 1.6.0. >>>>> >>>>> >>>>> >>>>> Chuck >>>>> >>>>> _______________________________________________ >>>>> SciPy-Dev mailing list >>>>> SciPy-Dev at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Sun Oct 23 17:18:10 2011 From: ckkart at hoc.net (Christian K.) Date: Sun, 23 Oct 2011 23:18:10 +0200 Subject: [SciPy-User] calculate predicted values from regression + confidence intervall In-Reply-To: References: <20111017095906.326260@gmx.net> Message-ID: Am 21.10.11 02:03, schrieb josef.pktd at gmail.com: > On Thu, Oct 20, 2011 at 5:15 PM, Robert Kern wrote: >> On Thu, Oct 20, 2011 at 21:50, Christian K. wrote: >>> Am 20.10.11 16:12, schrieb josef.pktd at gmail.com: >>>> On Thu, Oct 20, 2011 at 5:11 AM, Christian K. wrote: >>>>> gmail.com> writes: >>>>>>> f(X,Y) = a1-a2*log(X)+a3/Y (inverse power/Arrhenius model from accelerated >>>>>>> reliability testing) >>>>>> >>>>>> your f(X,Y) is still linear in the parameters, a1, a2, a3. So the >>>>>> linear version still applies. 
>>>>>
>>>>> Ok, but then I do not understand how to follow your indications for the
>>>>> prediction interval:
>>>>>
>>>>>>> distributed with mean y = Y = X*beta, and var(y) = X' * cov_beta * X +
>>>>>>> var_u_estimate (dot products for appropriate shapes)
>>>>>
>>>>> X in my case is [X,Y] and cov_beta has a shape of 3x3, since there are 3
>>>>> parameters.
>>>>> Sorry for my ignorance on statistics, I really appreciate your help.
>>>>
>>>> I'm attaching a complete example for the linear in parameters case,
>>>> including the comparison with statsmodels.
>>>
>>> Ok, I got it, thank you very much. As I understood, this works for OLS
>>> (only?).
>
> It's OLS only, but it can be adapted to other estimators such as non-linear
> least squares or weighted least squares.
>
> I never looked at the details of odr, so I'm no help there.
>
> Josef
>
>>> What about if I get the covariance matrix from a 2D odr/leastsq
>>> fit from scipy.odr ? I noticed that the covariance matrices differ by a
>>> constant (large) factor.
>>
>> ODRPACK will scale the covariance matrix by the Chi^2 score of the
>> residuals (i.e. divide the residuals by the error bars, square, sum,
>> divide by nobs-nparams), IIRC. This accounts for misestimation of the
>> error bars. If the error bars were correctly estimated, the Chi^2
>> score will be ~1. If the error bars were too small compared to the
>> residuals, then the Chi^2 score will be high, and thus increase the
>> estimated variance, etc. This may or may not be what you want,
>> especially when comparing it with other tools, but it's what ODRPACK
>> computes, so it's what scipy.odr returns.

I see. While playing with the errors of both the inputs and outputs of odr, I noticed that the covariance matrix does depend on the input errors even when specifying fit_type=2 == least squares. Is that to be expected?

Anyway, thank you both for your valuable inputs.
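For reference, the recipe quoted above — var(y) = X' * cov_beta * X + var_u_estimate — can be sketched in plain NumPy. This is an illustration only, not the attachment from the thread: the data, the true coefficients, and all variable names here are made up, and the 1.96 normal quantile stands in for a t quantile.

```python
import numpy as np

# Illustrative data for f(X, Y) = a1 - a2*log(X) + a3/Y, which is linear
# in the parameters a1, a2, a3.
rng = np.random.RandomState(0)
X = rng.uniform(1.0, 10.0, 50)
Y = rng.uniform(1.0, 5.0, 50)
a_true = np.array([2.0, 0.5, 3.0])
design = np.column_stack([np.ones_like(X), -np.log(X), 1.0 / Y])
z = design.dot(a_true) + rng.normal(scale=0.1, size=X.size)

# OLS fit: coefficients, residual variance, and parameter covariance.
beta = np.linalg.lstsq(design, z, rcond=None)[0]
resid = z - design.dot(beta)
nobs, nparams = design.shape
var_u = resid.dot(resid) / (nobs - nparams)
cov_beta = var_u * np.linalg.inv(design.T.dot(design))

# Prediction variance at a new point x: x' * cov_beta * x + var_u.
x_new = np.array([1.0, -np.log(5.0), 1.0 / 2.0])
z_hat = x_new.dot(beta)
var_pred = x_new.dot(cov_beta).dot(x_new) + var_u
# ~95% prediction interval via the normal quantile; for small samples use
# the t quantile with nobs - nparams degrees of freedom instead.
lo, hi = z_hat - 1.96 * np.sqrt(var_pred), z_hat + 1.96 * np.sqrt(var_pred)
```

The prediction variance is always at least var_u, since the quadratic form in cov_beta is non-negative; the extra term reflects parameter-estimation uncertainty at the evaluation point.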
Christian

From hodgson.neil at yahoo.co.uk Sun Oct 23 18:28:40 2011
From: hodgson.neil at yahoo.co.uk (Neil Hodgson)
Date: Sun, 23 Oct 2011 23:28:40 +0100 (BST)
Subject: [SciPy-User] scikits.timeseries for many, large, independent and irregular time series
Message-ID: <1319408920.25666.YahooMailNeo@web27405.mail.ukl.yahoo.com>

David,

I've been doing some work with something like this.

>> 1. I've seen posts discussing converting irregular timeseries to "proper" regularly spaced TimeSeries data.

I have been keeping my eye on the excellent-looking Pandas and scikits.timeseries (which plan to consolidate, see http://pandas.sourceforge.net/timeseries.html), but for the reason you describe above I've also so far stuck to some home-grown code. It seems like lots of methods would need adapting to cope with non-uniformly sampled data (more common in the geosciences than in financial data, for example). I've been waiting for Numpy 1.7 and the new datetime64 dtype before investing any serious time and energy into even thinking about it.

>> 2. Some computations could involve very large TimeSeries objects.

Here, I am using PyTables, with datetimes stored as float64. I think it's perfect for what you describe. (Pandas is already also using PyTables as an optional io platform.)

Hope that helps and I am interested to see what other people are doing,
Neil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wesmckinn at gmail.com Sun Oct 23 20:40:47 2011
From: wesmckinn at gmail.com (Wes McKinney)
Date: Sun, 23 Oct 2011 20:40:47 -0400
Subject: [SciPy-User] scikits.timeseries for many, large, independent and irregular time series
In-Reply-To: <1319408920.25666.YahooMailNeo@web27405.mail.ukl.yahoo.com>
References: <1319408920.25666.YahooMailNeo@web27405.mail.ukl.yahoo.com>
Message-ID:

On Sun, Oct 23, 2011 at 6:28 PM, Neil Hodgson wrote:
> David,
>
> I've been doing some work with something like this.
>>> 1. I've seen posts discussing converting irregular timeseries to
>>> "proper" regularly spaced TimeSeries data.
> I have been keeping my eye on the excellent-looking Pandas and
> scikits.timeseries (which plan to consolidate, see
> http://pandas.sourceforge.net/timeseries.html), but for the reason you
> describe above I've also so far stuck to some home-grown code. It seems
> like lots of methods would need adapting to cope with non-uniformly sampled
> data (more common in the geosciences than in financial data, for example).
> I've been waiting for Numpy 1.7 and the new datetime64 dtype before
> investing any serious time and energy into even thinking about it.
>>> 2. Some computations could involve very large TimeSeries objects.
> Here, I am using PyTables, with datetimes stored as float64. I think it's
> perfect for what you describe. (Pandas is already also using PyTables as an
> optional io platform.)
> Hope that helps and I am interested to see what other people are doing,
> Neil
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

As the guy behind pandas, I am admittedly biased, but I can confirm that it is very good for (in fact, largely designed for) irregularly spaced time series (and all kinds of labeled / structured data) and in large part lacks the rigidity of scikits.timeseries. So I would strongly recommend giving it a try before going down the path of building your own data structures. I will be continuing to actively develop and support pandas over the coming years, so having more users providing feedback on functionality would be very useful for me.

Lately I've built the support infrastructure to enable very fast operations involving datetime64-indexed data.
All of this is available-- if you have an ordered int64-based index, alignment operations and merging / joining operations will be extremely fast (I wrote about this here last month: http://wesmckinney.com/blog/?p=232) If you need to use float64-based indexes, it should be straightforward to make a similarly-fast index data structure (more or less a copy-paste job of the Int64Index, changing the Cython functions to use their float64 counterparts). Sometime between now and the end of the year I am going to integrate datetime64 more thoroughly, essentially eliminating datetime.datetime-based indexing for most practical use cases. This should be pretty straightforward to do but will require some care to ease the transition for legacy systems built based on datetime.datetime indexing. PyTables is an excellent storage option, especially if your data is largely static. pandas provides a dict-like HDFStore class for storing time series data, which may not be a bad place to start. best, Wes From opossumnano at gmail.com Mon Oct 24 09:58:12 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Mon, 24 Oct 2011 15:58:12 +0200 Subject: [SciPy-User] ANN: MDP 3.2 released! Message-ID: <20111024135812.GF28362@tulpenbaum.cognition.tu-berlin.de> We are glad to announce release 3.2 of the Modular toolkit for Data Processing (MDP). MDP is a Python library of widely used data processing algorithms that can be combined according to a pipeline analogy to build more complex data processing software. The base of available algorithms includes signal processing methods (Principal Component Analysis, Independent Component Analysis, Slow Feature Analysis), manifold learning methods ([Hessian] Locally Linear Embedding), several classifiers, probabilistic methods (Factor Analysis, RBM), data pre-processing methods, and many others. What's new in version 3.2? 
--------------------------

- improved sklearn wrappers
- update sklearn, shogun, and pp wrappers to new versions
- do not leave temporary files around after testing
- refactoring and cleaning up of HTML exporting features
- improve export of signature and doc-string to public methods
- fixed and updated FastICANode to closely resemble the original Matlab version (thanks to Ben Willmore)
- support for new numpy version
- new NeuralGasNode (thanks to Michael Schmuker)
- several bug fixes and improvements

We recommend all users to upgrade.

Resources
---------
Download: http://sourceforge.net/projects/mdp-toolkit/files
Homepage: http://mdp-toolkit.sourceforge.net
Mailing list: http://lists.sourceforge.net/mailman/listinfo/mdp-toolkit-users

Acknowledgments
---------------
We thank the contributors to this release: Michael Schmuker, Ben Willmore.

The MDP developers,
Pietro Berkes
Zbigniew Jędrzejewski-Szmek
Rike-Benjamin Schuppner
Niko Wilbert
Tiziano Zito

From johann.cohentanugi at gmail.com Mon Oct 24 13:50:19 2011
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Mon, 24 Oct 2011 19:50:19 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References:
Message-ID: <4EA5A55B.2010100@gmail.com>

Hello,
the OP is a colleague of mine and I looked quickly at the code. The infinite loop in the OP's illustrating script comes from the "while 1" loop at l.144 of linesearch.py: because phi0 is np.nan, phi1 is returned as np.nan as well, and the break condition is never met. There is an easy fix:

    while 1:
        stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1,
                                                   c1, c2, xtol, task,
                                                   amin, amax, isave, dsave)
        if task[:2] == asbytes('FG') and not np.isnan(phi1):
            alpha1 = stp
            phi1 = phi(stp)
            derphi1 = derphi(stp)
        else:
            break

but it is not a nice kludge.... Is there a better way to secure this while 1 loop? I am sure I am not covering all possible pathological cases with adding "not np.isnan(phi1)" in the code above.

Thoughts?
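One generic way to secure a loop like this — a sketch only, with a toy update rule standing in for the real minpack2.dcsrch driver, not the actual linesearch.py code — is to combine a finiteness guard (np.isfinite catches both nan and ±inf) with a hard iteration cap:

```python
import numpy as np

def guarded_search(phi, alpha1, maxiter=100):
    """Toy stand-in for the dcsrch driver loop: stop on convergence,
    on any non-finite objective value, or after maxiter steps."""
    phi1 = phi(alpha1)
    for _ in range(maxiter):
        if not np.isfinite(phi1):   # catches nan as well as +/-inf
            return None             # signal failure to the caller
        if abs(phi1) < 1e-8:        # stand-in for the 'CONVERGENCE' task
            return alpha1
        alpha1 *= 0.5               # stand-in step update
        phi1 = phi(alpha1)
    return None                     # iteration cap hit

# A well-behaved phi converges; a phi returning nan or inf terminates
# immediately instead of looping forever.
print(guarded_search(lambda a: a, 1.0))
print(guarded_search(lambda a: float('nan'), 1.0))
```

The point of the np.isfinite test is that it is a positive check ("is this value usable?") rather than a comparison against a sentinel, so it cannot be defeated by nan's always-False comparison behaviour.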
Johann On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: > I have run into a frustrating problem where scipy.optimize.fmin_bfgs > will get stuck in an infinite loop. > > I have submitted a bug report: > > http://projects.scipy.org/scipy/ticket/1494 > > but would also like to see if anybody on this list has any suggestions > or feedback. > > Thanks, > > -- > This message has been scanned for viruses and > dangerous content by *MailScanner* , and is > believed to be clean. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Oct 24 13:56:25 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 13:56:25 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: <4EA5A55B.2010100@gmail.com> References: <4EA5A55B.2010100@gmail.com> Message-ID: On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi wrote: > Hello, > the OP is a colleague of mine and I looked quickly at the code. The infinite > loop in the OP's illustrating script comes from the "while 1" loop in l.144 > of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan as > well, and the break condition is never met. There is an easy fix : > ??? while 1: > ??????? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1, > ?????????????????????????????????????????????????? c1, c2, xtol, task, > ?????????????????????????????????????????????????? amin, amax, isave, dsave) > ??????? if task[:2] == asbytes('FG') and not np.isnan(phi1): > ??????????? alpha1 = stp > ??????????? phi1 = phi(stp) > ??????????? derphi1 = derphi(stp) > ??????? else: > ??????????? break > > but it is not a nice kludge.... Is there a better way to secure this while 1 > loop? 
I am sure I am not covering all possible pathological cases with > adding "not np.isnan(phi1)" in the code above. Is this still a problem with 0.10 ? I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf Josef > > thoughts? > Johann > > On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: > > I have run into a frustrating problem where scipy.optimize.fmin_bfgs will > get stuck in an infinite loop. > > I have submitted a bug report: > > http://projects.scipy.org/scipy/ticket/1494 > > but would also like to see if anybody on this list has any suggestions or > feedback. > > Thanks, > > -- > This message has been scanned for viruses and > dangerous content by MailScanner, and is > believed to be clean. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Mon Oct 24 13:58:52 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 13:58:52 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> Message-ID: On Mon, Oct 24, 2011 at 1:56 PM, wrote: > On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi > wrote: >> Hello, >> the OP is a colleague of mine and I looked quickly at the code. The infinite >> loop in the OP's illustrating script comes from the "while 1" loop in l.144 >> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan as >> well, and the break condition is never met. There is an easy fix : >> ??? while 1: >> ??????? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1, >> ?????????????????????????????????????????????????? c1, c2, xtol, task, >> ?????????????????????????????????????????????????? amin, amax, isave, dsave) >> ??????? 
if task[:2] == asbytes('FG') and not np.isnan(phi1): >> ??????????? alpha1 = stp >> ??????????? phi1 = phi(stp) >> ??????????? derphi1 = derphi(stp) >> ??????? else: >> ??????????? break >> >> but it is not a nice kludge.... Is there a better way to secure this while 1 >> loop? I am sure I am not covering all possible pathological cases with >> adding "not np.isnan(phi1)" in the code above. > > Is this still a problem with 0.10 ? > I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf Is http://projects.scipy.org/scipy/ticket/1542 the same? josef > > Josef > > >> >> thoughts? >> Johann >> >> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >> >> I have run into a frustrating problem where scipy.optimize.fmin_bfgs will >> get stuck in an infinite loop. >> >> I have submitted a bug report: >> >> http://projects.scipy.org/scipy/ticket/1494 >> >> but would also like to see if anybody on this list has any suggestions or >> feedback. >> >> Thanks, >> >> -- >> This message has been scanned for viruses and >> dangerous content by MailScanner, and is >> believed to be clean. >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > From tmp50 at ukr.net Mon Oct 24 14:16:39 2011 From: tmp50 at ukr.net (Dmitrey) Date: Mon, 24 Oct 2011 21:16:39 +0300 Subject: [SciPy-User] [ANN] Multifactor analysis tool for experiment planning Message-ID: <2669.1319480199.16946892812193628160@ffe15.ukr.net> Hi all, > > new OpenOpt feature is available: Multifactor analysis tool for experiment planning (in physics, chemistry, biology etc). It is based on numerical optimization solver BOBYQA, released in 2009 by Michael J.D. Powell, and has easy and convenient GUI frontend, written in Python + tkinter. 
Maybe other (alternative) engines will be available in future. > > See its webpage for details. > > Regards, Dmitrey. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johann.cohentanugi at gmail.com Mon Oct 24 14:39:35 2011 From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi) Date: Mon, 24 Oct 2011 20:39:35 +0200 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> Message-ID: <4EA5B0E7.1080106@gmail.com> Dear Josef On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: > On Mon, Oct 24, 2011 at 1:56 PM, wrote: >> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >> wrote: >>> Hello, >>> the OP is a colleague of mine and I looked quickly at the code. The infinite >>> loop in the OP's illustrating script comes from the "while 1" loop in l.144 >>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan as >>> well, and the break condition is never met. There is an easy fix : >>> while 1: >>> stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1, >>> c1, c2, xtol, task, >>> amin, amax, isave, dsave) >>> if task[:2] == asbytes('FG') and not np.isnan(phi1): >>> alpha1 = stp >>> phi1 = phi(stp) >>> derphi1 = derphi(stp) >>> else: >>> break >>> >>> but it is not a nice kludge.... Is there a better way to secure this while 1 >>> loop? I am sure I am not covering all possible pathological cases with >>> adding "not np.isnan(phi1)" in the code above. >> Is this still a problem with 0.10 ? >> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf Well I am a complete newby with github, but I think I went to the head of master before testing, and the problem is still there. I can see the code snippet from https://github.com/scipy/scipy/commit/a31acbf in my local copy, and this is testing against +/-inf, not nan. 
Changing the OP's code to test against inf yields : In [1]: run test_bgfs.py /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: RuntimeWarning: invalid value encountered in subtract if (max(numpy.ravel(abs(sim[1:] - sim[0]))) <= xtol \ /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: RuntimeWarning: invalid value encountered in subtract xr = (1 + rho)*xbar - rho*sim[-1] /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: RuntimeWarning: invalid value encountered in subtract sim[j] = sim[0] + sigma*(sim[j] - sim[0]) fmin works [ inf] /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: RuntimeWarning: invalid value encountered in subtract alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: RuntimeWarning: invalid value encountered in subtract alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: RuntimeWarning: invalid value encountered in subtract B = (fb-D-C*db)/(db*db) fmin_bfgs gets stuck in a loop [ nan] so it looks like your code solves the inf situation, but not the nan. > Is http://projects.scipy.org/scipy/ticket/1542 the same? yes it looks like a duplicate > josef > >> Josef >> >> >>> thoughts? >>> Johann >>> >>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>> >>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs will >>> get stuck in an infinite loop. >>> >>> I have submitted a bug report: >>> >>> http://projects.scipy.org/scipy/ticket/1494 >>> >>> but would also like to see if anybody on this list has any suggestions or >>> feedback. 
>>> >>> Thanks, >>> >>> -- >>> This message has been scanned for viruses and >>> dangerous content by MailScanner, and is >>> believed to be clean. >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> From josef.pktd at gmail.com Mon Oct 24 15:26:59 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 15:26:59 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: <4EA5B0E7.1080106@gmail.com> References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> Message-ID: On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi wrote: > Dear Josef > On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >> >> On Mon, Oct 24, 2011 at 1:56 PM, ?wrote: >>> >>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>> ?wrote: >>>> >>>> Hello, >>>> the OP is a colleague of mine and I looked quickly at the code. The >>>> infinite >>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>> l.144 >>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan as >>>> well, and the break condition is never met. There is an easy fix : >>>> ? ? while 1: >>>> ? ? ? ? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>> derphi1, >>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?c1, c2, xtol, task, >>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?amin, amax, isave, >>>> dsave) >>>> ? ? ? ? if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>> ? ? ? ? ? ? alpha1 = stp >>>> ? ? ? ? ? ? phi1 = phi(stp) >>>> ? ? ? ? ? ? derphi1 = derphi(stp) >>>> ? ? ? ? else: >>>> ? ? ? ? ? ? break >>>> >>>> but it is not a nice kludge.... Is there a better way to secure this >>>> while 1 >>>> loop? 
I am sure I am not covering all possible pathological cases with >>>> adding "not np.isnan(phi1)" in the code above. >>> >>> Is this still a problem with 0.10 ? >>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf > > Well I am a complete newby with github, but I think I went to the head of > master before testing, and the problem is still there. I can see the code > snippet from https://github.com/scipy/scipy/commit/a31acbf in my local copy, > and this is testing against +/-inf, not nan. Changing the OP's code to test > against inf yields : > In [1]: run test_bgfs.py > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: > RuntimeWarning: invalid value encountered in subtract > ?if (max(numpy.ravel(abs(sim[1:] - sim[0]))) <= xtol \ > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: > RuntimeWarning: invalid value encountered in subtract > ?xr = (1 + rho)*xbar - rho*sim[-1] > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: > RuntimeWarning: invalid value encountered in subtract > ?sim[j] = sim[0] + sigma*(sim[j] - sim[0]) > fmin works [ inf] > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: > RuntimeWarning: invalid value encountered in subtract > ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: > RuntimeWarning: invalid value encountered in subtract > ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) > /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: > RuntimeWarning: invalid value encountered in subtract > ?B = (fb-D-C*db)/(db*db) > fmin_bfgs gets stuck in a loop [ nan] > > so it looks like your code solves the inf situation, but not the nan. 
It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't want to have to kill my interpreter session) isfinite also checks for nans >>> np.isfinite(np.nan) False so there should be another reason that the linesearch doesn't return. Josef > > >> Is http://projects.scipy.org/scipy/ticket/1542 the same? > > yes it looks like a duplicate >> >> josef >> >>> Josef >>> >>> >>>> thoughts? >>>> Johann >>>> >>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>> >>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>> will >>>> get stuck in an infinite loop. >>>> >>>> I have submitted a bug report: >>>> >>>> http://projects.scipy.org/scipy/ticket/1494 >>>> >>>> but would also like to see if anybody on this list has any suggestions >>>> or >>>> feedback. >>>> >>>> Thanks, >>>> >>>> -- >>>> This message has been scanned for viruses and >>>> dangerous content by MailScanner, and is >>>> believed to be clean. >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> > From johann.cohentanugi at gmail.com Mon Oct 24 15:36:34 2011 From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi) Date: Mon, 24 Oct 2011 21:36:34 +0200 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> Message-ID: <4EA5BE42.9010206@gmail.com> On 10/24/2011 07:56 PM, josef.pktd at gmail.com wrote: > On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi > wrote: >> Hello, >> the OP is a colleague of mine and I looked quickly at the code. 
The infinite >> loop in the OP's illustrating script comes from the "while 1" loop in l.144 >> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan as >> well, and the break condition is never met. There is an easy fix : >> while 1: >> stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1, >> c1, c2, xtol, task, >> amin, amax, isave, dsave) >> if task[:2] == asbytes('FG') and not np.isnan(phi1): >> alpha1 = stp >> phi1 = phi(stp) >> derphi1 = derphi(stp) >> else: >> break >> >> but it is not a nice kludge.... Is there a better way to secure this while 1 >> loop? I am sure I am not covering all possible pathological cases with >> adding "not np.isnan(phi1)" in the code above. > Is this still a problem with 0.10 ? > I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf actually, this would not fix the nan issue, which ends up sitting on an infinite loop on l.531 where line_search_wolfe1 is called. So the 2 trac entries were maybe different : one meant for +/-inf which is fixed, and one for the nan case, which is still problematic. best, Johann > Josef > > >> thoughts? >> Johann >> >> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >> >> I have run into a frustrating problem where scipy.optimize.fmin_bfgs will >> get stuck in an infinite loop. >> >> I have submitted a bug report: >> >> http://projects.scipy.org/scipy/ticket/1494 >> >> but would also like to see if anybody on this list has any suggestions or >> feedback. >> >> Thanks, >> >> -- >> This message has been scanned for viruses and >> dangerous content by MailScanner, and is >> believed to be clean. 
>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> From josef.pktd at gmail.com Mon Oct 24 15:59:10 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 15:59:10 -0400 Subject: [SciPy-User] Fwd: fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr> Message-ID: tricky things these reply to all, forwarding to list ---------- Forwarded message ---------- From: Date: Mon, Oct 24, 2011 at 3:52 PM Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop To: johann.cohen-tanugi at lupm.in2p3.fr On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi wrote: > indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit > gracefully, so that Ralf's patch is never encountered. I'm not sure what's going on, I just copied the few lines from https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and the original example stops and I'm not able to produce an endless loop anymore when I try to change around with any of the examples, even when I start with a nan, it stops immediately. ?I only tried the one parameter example. Can you check that you actually run the code that has this fix? Josef > Johann > > On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >> >> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >> ?wrote: >>> >>> Dear Josef >>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>> >>>> On Mon, Oct 24, 2011 at 1:56 PM, ? ?wrote: >>>>> >>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>> ? ?wrote: >>>>>> >>>>>> Hello, >>>>>> the OP is a colleague of mine and I looked quickly at the code. 
The >>>>>> infinite >>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>> l.144 >>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>> as >>>>>> well, and the break condition is never met. There is an easy fix : >>>>>> ? ? while 1: >>>>>> ? ? ? ? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>> derphi1, >>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?c1, c2, xtol, task, >>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?amin, amax, isave, >>>>>> dsave) >>>>>> ? ? ? ? if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>> ? ? ? ? ? ? alpha1 = stp >>>>>> ? ? ? ? ? ? phi1 = phi(stp) >>>>>> ? ? ? ? ? ? derphi1 = derphi(stp) >>>>>> ? ? ? ? else: >>>>>> ? ? ? ? ? ? break >>>>>> >>>>>> but it is not a nice kludge.... Is there a better way to secure this >>>>>> while 1 >>>>>> loop? I am sure I am not covering all possible pathological cases with >>>>>> adding "not np.isnan(phi1)" in the code above. >>>>> >>>>> Is this still a problem with 0.10 ? >>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>> >>> Well I am a complete newby with github, but I think I went to the head of >>> master before testing, and the problem is still there. I can see the code >>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>> copy, >>> and this is testing against +/-inf, not nan. 
Changing the OP's code to >>> test >>> against inf yields : >>> In [1]: run test_bgfs.py >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>> RuntimeWarning: invalid value encountered in subtract >>> ?if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>> RuntimeWarning: invalid value encountered in subtract >>> ?xr = (1 + rho)*xbar - rho*sim[-1] >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>> RuntimeWarning: invalid value encountered in subtract >>> ?sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>> fmin works [ inf] >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>> RuntimeWarning: invalid value encountered in subtract >>> ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>> RuntimeWarning: invalid value encountered in subtract >>> ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>> >>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>> RuntimeWarning: invalid value encountered in subtract >>> ?B = (fb-D-C*db)/(db*db) >>> fmin_bfgs gets stuck in a loop [ nan] >>> >>> so it looks like your code solves the inf situation, but not the nan. >> >> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >> want to have to kill my interpreter session) >> >> isfinite also checks for nans >> >>>>> np.isfinite(np.nan) >> >> False >> >> so there should be another reason that the linesearch doesn't return. >> >> Josef >> >> >> >> >> >>> >>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>> >>> yes it looks like a duplicate >>>> >>>> josef >>>> >>>>> Josef >>>>> >>>>> >>>>>> thoughts? 
>>>>>> Johann >>>>>> >>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>> >>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>> will >>>>>> get stuck in an infinite loop. >>>>>> >>>>>> I have submitted a bug report: >>>>>> >>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>> >>>>>> but would also like to see if anybody on this list has any suggestions >>>>>> or >>>>>> feedback. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> -- >>>>>> This message has been scanned for viruses and >>>>>> dangerous content by MailScanner, and is >>>>>> believed to be clean. >>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-User mailing list >>>>>> SciPy-User at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>> >>>>>> > From josef.pktd at gmail.com Mon Oct 24 16:14:26 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 16:14:26 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr> Message-ID: On Mon, Oct 24, 2011 at 3:59 PM, wrote: > tricky things these reply to all, forwarding to list > > ---------- Forwarded message ---------- > From: ? > Date: Mon, Oct 24, 2011 at 3:52 PM > Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop > To: johann.cohen-tanugi at lupm.in2p3.fr > > > On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi > wrote: >> indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit >> gracefully, so that Ralf's patch is never encountered. 
> > I'm not sure what's going on, > > I just copied the few lines from > https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and > the original example stops and I'm not able to produce an endless loop > anymore when I try to change around with any of the examples, even > when I start with a nan, it stops immediately. ?I only tried the one > parameter example. Nope, still endless in linesearch with -np.exp(-x/2.) -np.exp(-x**2) the third iteration starts with nan as parameter and never returns from the line search. So something like your original fix to exit a nan linesearch looks necessary. Josef > > Can you check that you actually run the code that has this fix? > > Josef > > >> Johann >> >> On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >>> >>> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >>> ?wrote: >>>> >>>> Dear Josef >>>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>>> >>>>> On Mon, Oct 24, 2011 at 1:56 PM, ? ?wrote: >>>>>> >>>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>>> ? ?wrote: >>>>>>> >>>>>>> Hello, >>>>>>> the OP is a colleague of mine and I looked quickly at the code. The >>>>>>> infinite >>>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>>> l.144 >>>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>>> as >>>>>>> well, and the break condition is never met. There is an easy fix : >>>>>>> ? ? while 1: >>>>>>> ? ? ? ? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>>> derphi1, >>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?c1, c2, xtol, task, >>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?amin, amax, isave, >>>>>>> dsave) >>>>>>> ? ? ? ? if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>>> ? ? ? ? ? ? alpha1 = stp >>>>>>> ? ? ? ? ? ? phi1 = phi(stp) >>>>>>> ? ? ? ? ? ? derphi1 = derphi(stp) >>>>>>> ? ? ? ? else: >>>>>>> ? ? ? ? ? ? break >>>>>>> >>>>>>> but it is not a nice kludge.... 
Is there a better way to secure this >>>>>>> while 1 >>>>>>> loop? I am sure I am not covering all possible pathological cases with >>>>>>> adding "not np.isnan(phi1)" in the code above. >>>>>> >>>>>> Is this still a problem with 0.10 ? >>>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>>> >>>> Well I am a complete newby with github, but I think I went to the head of >>>> master before testing, and the problem is still there. I can see the code >>>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>>> copy, >>>> and this is testing against +/-inf, not nan. Changing the OP's code to >>>> test >>>> against inf yields : >>>> In [1]: run test_bgfs.py >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?xr = (1 + rho)*xbar - rho*sim[-1] >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>>> fmin works [ inf] >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>> >>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>>> RuntimeWarning: invalid value encountered in subtract >>>> ?B = 
(fb-D-C*db)/(db*db) >>>> fmin_bfgs gets stuck in a loop [ nan] >>>> >>>> so it looks like your code solves the inf situation, but not the nan. >>> >>> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >>> want to have to kill my interpreter session) >>> >>> isfinite also checks for nans >>> >>>>>> np.isfinite(np.nan) >>> >>> False >>> >>> so there should be another reason that the linesearch doesn't return. >>> >>> Josef >>> >>> >>> >>> >>> >>>> >>>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>>> >>>> yes it looks like a duplicate >>>>> >>>>> josef >>>>> >>>>>> Josef >>>>>> >>>>>> >>>>>>> thoughts? >>>>>>> Johann >>>>>>> >>>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>>> >>>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>>> will >>>>>>> get stuck in an infinite loop. >>>>>>> >>>>>>> I have submitted a bug report: >>>>>>> >>>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>>> >>>>>>> but would also like to see if anybody on this list has any suggestions >>>>>>> or >>>>>>> feedback. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> -- >>>>>>> This message has been scanned for viruses and >>>>>>> dangerous content by MailScanner, and is >>>>>>> believed to be clean. 
>>>>>>> >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>> >>>>>>> _______________________________________________ >>>>>>> SciPy-User mailing list >>>>>>> SciPy-User at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>> >>>>>>> >> > From josef.pktd at gmail.com Mon Oct 24 16:34:33 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Oct 2011 16:34:33 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr> Message-ID: On Mon, Oct 24, 2011 at 4:14 PM, wrote: > On Mon, Oct 24, 2011 at 3:59 PM, ? wrote: >> tricky things these reply to all, forwarding to list >> >> ---------- Forwarded message ---------- >> From: ? >> Date: Mon, Oct 24, 2011 at 3:52 PM >> Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop >> To: johann.cohen-tanugi at lupm.in2p3.fr >> >> >> On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi >> wrote: >>> indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit >>> gracefully, so that Ralf's patch is never encountered. >> >> I'm not sure what's going on, >> >> I just copied the few lines from >> https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and >> the original example stops and I'm not able to produce an endless loop >> anymore when I try to change around with any of the examples, even >> when I start with a nan, it stops immediately. ?I only tried the one >> parameter example. > > Nope, still endless in linesearch with -np.exp(-x/2.) -np.exp(-x**2) > the third iteration starts with nan as parameter and never returns > from the line search. > > So something like your original fix to exit a nan linesearch looks necessary. 
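Josef's failing objective (quoted just above) shows why the parameter turns into NaN in the first place: the function is unbounded below, so once a step goes far enough downhill the first exponential overflows to inf, and a subsequent difference of infinities produces NaN (a sketch; x = -1500 is just an illustrative point past the overflow threshold):

```python
import numpy as np

def f(x):
    # Josef's example objective: unbounded below as x -> -inf
    return -np.exp(-x / 2.0) - np.exp(-x ** 2)

with np.errstate(over="ignore", invalid="ignore"):
    val = f(np.array([-1500.0]))   # exp(750) overflows to inf -> f = -inf
    diff = val - val               # inf - inf, as in a difference quotient

assert np.isinf(val[0]) and val[0] < 0
assert np.isnan(diff[0])
```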
In my example the linesearch is getting pk=np.nan So I tried the fix still in the optimization loop, at the end of the loop after Hk = numpy.dot(A1,numpy.dot(Hk,A2)) + rhok * sk[:,numpy.newaxis] \ * sk[numpy.newaxis,:] I added the pk check #add this pk = -numpy.dot(Hk,gfk) if not numpy.isfinite(pk): #make sure we will have a valid search direction warnflag = 2 break Now my example stops with the previous iteration, when there is no new search direction in the next iteration loop. I'm duplicating now the pk fix, so it's not really a fix yet, but it seems to work. (I would be glad if bfgs is more robust so I dare to use it also.) bfgs is missing a max function evaluation limit, I build it into my objective function to avoid the endless loop. Josef > > Josef > > >> >> Can you check that you actually run the code that has this fix? >> >> Josef >> >> >>> Johann >>> >>> On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >>>> >>>> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >>>> ?wrote: >>>>> >>>>> Dear Josef >>>>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>>>> >>>>>> On Mon, Oct 24, 2011 at 1:56 PM, ? ?wrote: >>>>>>> >>>>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>>>> ? ?wrote: >>>>>>>> >>>>>>>> Hello, >>>>>>>> the OP is a colleague of mine and I looked quickly at the code. The >>>>>>>> infinite >>>>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>>>> l.144 >>>>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>>>> as >>>>>>>> well, and the break condition is never met. There is an easy fix : >>>>>>>> ? ? while 1: >>>>>>>> ? ? ? ? stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>>>> derphi1, >>>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?c1, c2, xtol, task, >>>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?amin, amax, isave, >>>>>>>> dsave) >>>>>>>> ? ? ? ? if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>>>> ? ? ? ? ? ? 
alpha1 = stp >>>>>>>> ? ? ? ? ? ? phi1 = phi(stp) >>>>>>>> ? ? ? ? ? ? derphi1 = derphi(stp) >>>>>>>> ? ? ? ? else: >>>>>>>> ? ? ? ? ? ? break >>>>>>>> >>>>>>>> but it is not a nice kludge.... Is there a better way to secure this >>>>>>>> while 1 >>>>>>>> loop? I am sure I am not covering all possible pathological cases with >>>>>>>> adding "not np.isnan(phi1)" in the code above. >>>>>>> >>>>>>> Is this still a problem with 0.10 ? >>>>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>>>> >>>>> Well I am a complete newby with github, but I think I went to the head of >>>>> master before testing, and the problem is still there. I can see the code >>>>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>>>> copy, >>>>> and this is testing against +/-inf, not nan. Changing the OP's code to >>>>> test >>>>> against inf yields : >>>>> In [1]: run test_bgfs.py >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> ?if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> ?xr = (1 + rho)*xbar - rho*sim[-1] >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> ?sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>>>> fmin works [ inf] >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> ?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> 
?alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> ?B = (fb-D-C*db)/(db*db) >>>>> fmin_bfgs gets stuck in a loop [ nan] >>>>> >>>>> so it looks like your code solves the inf situation, but not the nan. >>>> >>>> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >>>> want to have to kill my interpreter session) >>>> >>>> isfinite also checks for nans >>>> >>>>>>> np.isfinite(np.nan) >>>> >>>> False >>>> >>>> so there should be another reason that the linesearch doesn't return. >>>> >>>> Josef >>>> >>>> >>>> >>>> >>>> >>>>> >>>>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>>>> >>>>> yes it looks like a duplicate >>>>>> >>>>>> josef >>>>>> >>>>>>> Josef >>>>>>> >>>>>>> >>>>>>>> thoughts? >>>>>>>> Johann >>>>>>>> >>>>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>>>> >>>>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>>>> will >>>>>>>> get stuck in an infinite loop. >>>>>>>> >>>>>>>> I have submitted a bug report: >>>>>>>> >>>>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>>>> >>>>>>>> but would also like to see if anybody on this list has any suggestions >>>>>>>> or >>>>>>>> feedback. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> -- >>>>>>>> This message has been scanned for viruses and >>>>>>>> dangerous content by MailScanner, and is >>>>>>>> believed to be clean. 
>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> SciPy-User mailing list >>>>>>>> SciPy-User at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> SciPy-User mailing list >>>>>>>> SciPy-User at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>> >>>>>>>> >>> >> > From wesmckinn at gmail.com Mon Oct 24 23:54:42 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 24 Oct 2011 23:54:42 -0400 Subject: [SciPy-User] ANN: pandas 0.5.0 Message-ID: I'm very pleased to announce the pandas 0.5.0 major release. This release features bug fixes, speed optimizations, new features, removal of APIs deprecated in 0.4.0 series, and a number of other API changes related to the file parsing functions. See the full release notes below. Here are some highlights of the most important changes since 0.4.0 (released 9/12/2011): - Python 3 support - Retooled file (CSV, flat file) parsing with better type inference, 5-10x speedup in many cases, and the addition of a chunksize argument for reading large files piece by piece. pandas is now one of the fastest ways available to read structured text files into Python - IPython tab completion of DataFrame columns via attribute access - New pivot_table convenience function - Better integrated handling of indexing metadata (names for indexes) - New Int64Index for fast data alignment and merging of integer-indexed data. Will enable fast datetime64-based time series processing in a future release - Faster data alignment overall in Series and DataFrame - Significantly faster merging / joining of DataFrames - Multi-key DataFrame joins Thanks to all who contributed bug reports and pull requests between 0.4.3 and 0.5.0. It's been a short, very intense 2 weeks! 
best,
Wes

What is it
==========

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language.

Links
=====

Release Notes: https://github.com/wesm/pandas/blob/master/RELEASE.rst
Documentation: http://pandas.sourceforge.net
Installers: http://pypi.python.org/pypi/pandas
Code Repository: http://github.com/wesm/pandas
Mailing List: http://groups.google.com/group/pystatsmodels
Blog: http://blog.wesmckinney.com

pandas 0.5.0
============

**Release date:** 10/24/2011

This release of pandas includes a number of API changes (see below) and cleanup of deprecated APIs from pre-0.4.0 releases. There are also bug fixes, new features, numerous significant performance enhancements, and a new IPython completer hook that enables tab completion of DataFrame column accesses as attributes. In addition to the changes listed here from 0.4.3 to 0.5.0, the minor releases 0.4.1, 0.4.2, and 0.4.3 brought some significant new functionality and performance improvements that are worth taking a look at.

Thanks to all for bug reports, contributed patches, and generally providing feedback on the library.

**API Changes**

- `read_table`, `read_csv`, and `ExcelFile.parse` default argument for `index_col` is now None.
To use one or more of the columns as the resulting DataFrame's index, these must be explicitly specified now
- Parsing functions like `read_csv` no longer parse dates by default (GH #225)
- Removed `weights` option in panel regression which was not doing anything principled (GH #155)
- Changed `buffer` argument name in `Series.to_string` to `buf`
- `Series.to_string` and `DataFrame.to_string` now return strings by default instead of printing to sys.stdout
- Deprecated `nanRep` argument in various `to_string` and `to_csv` functions in favor of `na_rep`. Will be removed in 0.6 (GH #275)
- Renamed `delimiter` to `sep` in `DataFrame.from_csv` for consistency
- Changed order of `Series.clip` arguments to match those of `numpy.clip` and added (unimplemented) `out` argument so `numpy.clip` can be called on a Series (GH #272)
- Series functions renamed (and thus deprecated) in 0.4 series have been removed:
  * `asOf`, use `asof`
  * `toDict`, use `to_dict`
  * `toString`, use `to_string`
  * `toCSV`, use `to_csv`
  * `merge`, use `map`
  * `applymap`, use `apply`
  * `combineFirst`, use `combine_first`
  * `_firstTimeWithValue`, use `first_valid_index`
  * `_lastTimeWithValue`, use `last_valid_index`
- DataFrame functions renamed / deprecated in 0.4 series have been removed:
  * `asMatrix` method, use `as_matrix` or `values` attribute
  * `combineFirst`, use `combine_first`
  * `getXS`, use `xs`
  * `merge`, use `join`
  * `fromRecords`, use `from_records`
  * `fromcsv`, use `from_csv`
  * `toRecords`, use `to_records`
  * `toDict`, use `to_dict`
  * `toString`, use `to_string`
  * `toCSV`, use `to_csv`
  * `_firstTimeWithValue`, use `first_valid_index`
  * `_lastTimeWithValue`, use `last_valid_index`
  * `toDataMatrix` is no longer needed
  * `rows()` method, use `index` attribute
  * `cols()` method, use `columns` attribute
  * `dropEmptyRows()`, use `dropna(how='all')`
  * `dropIncompleteRows()`, use `dropna()`
  * `tapply(f)`, use `apply(f, axis=1)`
  * `tgroupby(keyfunc, aggfunc)`, use `groupby` with `axis=1`
- Other
  outstanding deprecations have been removed:
  * `indexField` argument in `DataFrame.from_records`
  * `missingAtEnd` argument in `Series.order`. Use `na_last` instead
  * `Series.fromValue` classmethod, use regular `Series` constructor instead
  * Functions `parseCSV`, `parseText`, and `parseExcel` methods in `pandas.io.parsers` have been removed
  * `Index.asOfDate` function
  * `Panel.getMinorXS` (use `minor_xs`) and `Panel.getMajorXS` (use `major_xs`)
  * `Panel.toWide`, use `Panel.to_wide` instead

**New features / modules**

- Added `DataFrame.align` method with standard join options
- Added `parse_dates` option to `read_csv` and `read_table` methods to optionally try to parse dates in the index columns
- Add `nrows`, `chunksize`, and `iterator` arguments to `read_csv` and `read_table`. The last two return a new `TextParser` class capable of lazily iterating through chunks of a flat file (GH #242)
- Added ability to join on multiple columns in `DataFrame.join` (GH #214)
- Added private `_get_duplicates` function to `Index` for identifying duplicate values more easily
- Added column attribute access to DataFrame, e.g. df.A equivalent to df['A'] if 'A' is a column in the DataFrame (PR #213)
- Added IPython tab completion hook for DataFrame columns.
  (PR #233, GH #230)
- Implement `Series.describe` for Series containing objects (PR #241)
- Add inner join option to `DataFrame.join` when joining on key(s) (GH #248)
- Can select set of DataFrame columns by passing a list to `__getitem__` (GH #253)
- Can use & and | to intersection / union Index objects, respectively (GH #261)
- Added `pivot_table` convenience function to pandas namespace (GH #234)
- Implemented `Panel.rename_axis` function (GH #243)
- DataFrame will show index level names in console output
- Implemented `Panel.take`
- Add `set_eng_float_format` function for setting alternate DataFrame floating point string formatting
- Add convenience `set_index` function for creating a DataFrame index from its existing columns

**Improvements to existing features**

- Major performance improvements in file parsing functions `read_csv` and `read_table`
- Added Cython function for converting tuples to ndarray very fast. Speeds up many MultiIndex-related operations
- File parsing functions like `read_csv` and `read_table` will explicitly check if a parsed index has duplicates and raise a more helpful exception rather than deferring the check until later
- Refactored merging / joining code into a tidy class and disabled unnecessary computations in the float/object case, thus getting about 10% better performance (GH #211)
- Improved speed of `DataFrame.xs` on mixed-type DataFrame objects by about 5x, regression from 0.3.0 (GH #215)
- With new `DataFrame.align` method, speeding up binary operations between differently-indexed DataFrame objects by 10-25%.
- Significantly sped up conversion of nested dict into DataFrame (GH #212)
- Can pass hierarchical index level name to `groupby` instead of the level number if desired (GH #223)
- Add support for different delimiters in `DataFrame.to_csv` (PR #244)
- Add more helpful error message when importing pandas post-installation from the source directory (GH #250)
- Significantly speed up DataFrame `__repr__` and `count` on large mixed-type DataFrame objects
- Better handling of pyx file dependencies in Cython module build (GH #271)

**Bug fixes**

- `read_csv` / `read_table` fixes
  - Be less aggressive about converting float->int in cases of floating point representations of integers like 1.0, 2.0, etc.
  - "True"/"False" will not get correctly converted to boolean
  - Index name attribute will get set when specifying an index column
  - Passing column names should force `header=None` (GH #257)
  - Don't modify passed column names when `index_col` is not None (GH #258)
  - Can sniff CSV separator in zip file (since seek is not supported, was failing before)
- Worked around matplotlib "bug" in which series[:, np.newaxis] fails. Should be reported upstream to matplotlib (GH #224)
- DataFrame.iteritems was not returning Series with the name attribute set.
  Also neither was DataFrame._series
- Can store datetime.date objects in HDFStore (GH #231)
- Index and Series names are now stored in HDFStore
- Fixed problem in which data would get upcasted to object dtype in GroupBy.apply operations (GH #237)
- Fixed outer join bug with empty DataFrame (GH #238)
- Can create empty Panel (GH #239)
- Fix join on single key when passing list with 1 entry (GH #246)
- Don't raise Exception on plotting DataFrame with an all-NA column (GH #251, PR #254)
- Bug min/max errors when called on integer DataFrames (PR #241)
- `DataFrame.iteritems` and `DataFrame._series` not assigning name attribute
- Panel.__repr__ raised exception on length-0 major/minor axes
- `DataFrame.join` on key with empty DataFrame produced incorrect columns
- Implemented `MultiIndex.diff` (GH #260)
- `Int64Index.take` and `MultiIndex.take` lost name field, fix downstream issue GH #262
- Can pass list of tuples to `Series` (GH #270)
- Can pass level name to `DataFrame.stack`
- Support set operations between MultiIndex and Index
- Fix many corner cases in MultiIndex set operations
- Fix MultiIndex-handling bug with GroupBy.apply when returned groups are not indexed the same
- Fix corner case bugs in DataFrame.apply
- Setting DataFrame index did not cause Series cache to get cleared
- Various int32 -> int64 platform-specific issues
- Don't be too aggressive converting to integer when parsing file with MultiIndex (GH #285)
- Fix bug when slicing Series with negative indices before beginning

Thanks
------
- Thomas Kluyver
- Daniel Fortunov
- Aman Thakral
- Luca Beltrame
- Wouter Overmeire

From pav at iki.fi Tue Oct 25 05:46:02 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 25 Oct 2011 11:46:02 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To: <4EA5A55B.2010100@gmail.com>
References: <4EA5A55B.2010100@gmail.com>
Message-ID:

24.10.2011 19:50, Johann Cohen-Tanugi wrote:
[clip]
>     while 1:
>         stp, phi1, derphi1, task = minpack2.dcsrch(alpha1,
>                                                    phi1, derphi1,
>                                                    c1, c2, xtol, task,
>                                                    amin, amax, isave, dsave)
>         if task[:2] == asbytes('FG') and not np.isnan(phi1):
>             alpha1 = stp
>             phi1 = phi(stp)
>             derphi1 = derphi(stp)
>         else:
>             break

Looks correct to me. It should bail out from the loop on encountering a nan, as in that case it's unlikely it's possible to satisfy the Wolfe conditions.

-- 
Pauli Virtanen

From JRadinger at gmx.at Tue Oct 25 06:49:37 2011
From: JRadinger at gmx.at (Johannes Radinger)
Date: Tue, 25 Oct 2011 12:49:37 +0200
Subject: [SciPy-User] 2 questions: optimize.leastsq
Message-ID: <20111025104937.130680@gmx.net>

Hi,

I've got two questions concerning the least squares optimization:

1) I asked that already some time ago, but couldn't find the email with the answer anymore. Maybe you can help me out again:

It's all about optimize.leastsq. I use it to fit a function with several conditions. As a result I get the parameter estimates I am looking for and the 'ier'. According to the manual this is:

"An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise, the solution was not found"

I just want to know: what exactly do the numbers mean? What if it is 1 or 2? Is that any information about the quality of the fit?
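A minimal sketch (with synthetic data, not from this mail) of where `ier` and the accompanying `mesg` come from when `full_output=1` is passed:

```python
# Sketch with made-up data: full_output=1 makes leastsq also return the
# termination flag `ier` and the human-readable message `mesg`.
import numpy as np
from scipy.optimize import leastsq

def residuals(params, x, y):
    a, b = params
    return y - (a * x + b)

x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 1.0  # exact line, so the fit should recover a=3, b=1

popt, cov, infodict, mesg, ier = leastsq(
    residuals, [1.0, 0.0], args=(x, y), full_output=1)
# ier in (1, 2, 3, 4) means a solution was found; roughly, 1 = ftol
# criterion met, 2 = xtol, 3 = both, 4 = gtol (see MINPACK's lmdif).
print(ier, mesg)
```

The flag only says which stopping criterion triggered termination, not how good the fit is; goodness of fit has to be judged separately, e.g. from the residuals or from `cov`.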
2) I am doing the following optimization:

def pdf(x, s1, s2, p):
    return (p/(math.sqrt(2*math.pi*s1**2))*numpy.exp(-((x-0)**(2)/(2*s1**(2))))) \
         + ((1-p)/(math.sqrt(2*math.pi*s2**2))*numpy.exp(-((x-0)**(2)/(2*s2**(2)))))

def equ149(arg):
    s1, s2, p = numpy.abs(arg)
    cond1 = 0.7673 - integrate.quad(pdf, -25, 25, args=(s1, s2, p))[0]
    cond2 = 0.8184 - integrate.quad(pdf, -45, 45, args=(s1, s2, p))[0]
    cond3 = 0.8320 - integrate.quad(pdf, -55, 55, args=(s1, s2, p))[0]
    cond4 = 0.8688 - integrate.quad(pdf, -75, 75, args=(s1, s2, p))[0]
    cond5 = 0.8771 - integrate.quad(pdf, -85, 85, args=(s1, s2, p))[0]
    cond6 = 0.8951 - integrate.quad(pdf, -95, 95, args=(s1, s2, p))[0]
    cond7 = 0.9124 - integrate.quad(pdf, -105, 105, args=(s1, s2, p))[0]
    cond8 = 0.9237 - integrate.quad(pdf, -115, 115, args=(s1, s2, p))[0]
    cond9 = 0.935 - integrate.quad(pdf, -125, 125, args=(s1, s2, p))[0]
    cond10 = 0.95 - integrate.quad(pdf, -145, 145, args=(s1, s2, p))[0]
    cond11 = 0.962 - integrate.quad(pdf, -175, 175, args=(s1, s2, p))[0]
    cond12 = 0.9748 - integrate.quad(pdf, -195, 195, args=(s1, s2, p))[0]
    cond13 = 0.9876 - integrate.quad(pdf, -205, 205, args=(s1, s2, p))[0]
    cond14 = 0.9913 - integrate.quad(pdf, -265, 265, args=(s1, s2, p))[0]
    cond15 = 0.9988 - integrate.quad(pdf, -295, 295, args=(s1, s2, p))[0]
    cond16 = 0.0012/2 - integrate.quad(pdf, 315, numpy.inf, args=(s1, s2, p))[0]
    return [cond1, cond2, cond3, cond4, cond5, cond6, cond7, cond8, cond9, cond10,
            cond11, cond12, cond13, cond14, cond15, cond16]

result = leastsq(equ149, [30.0, 200.0, 0.7])

I do that about 100 times with different conditions and always get s1, s2 and p. If I then compare the results, s1 is more or less always 10 times smaller than s2. So it seems there is a constant factor. Now my question: is that "factor 10" only a result of my data (which would be great!)? Or is it a mathematical artifact of the optimization procedure?

Thank you very much!

/Johannes

-- 
NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zurück-Garantie!
Jetzt informieren: http://www.gmx.net/de/go/freephone

From josef.pktd at gmail.com Tue Oct 25 07:30:29 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 25 Oct 2011 07:30:29 -0400
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References: <4EA5A55B.2010100@gmail.com>
Message-ID:

On Tue, Oct 25, 2011 at 5:46 AM, Pauli Virtanen wrote:
> 24.10.2011 19:50, Johann Cohen-Tanugi wrote:
> [clip]
>>     while 1:
>>         stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1,
>>                                                    c1, c2, xtol, task,
>>                                                    amin, amax, isave, dsave)
>>         if task[:2] == asbytes('FG') and not np.isnan(phi1):
>>             alpha1 = stp
>>             phi1 = phi(stp)
>>             derphi1 = derphi(stp)
>>         else:
>>             break
>
> Looks correct to me. It should bail out from the loop on encountering
> a nan, as in that case it's unlikely it's possible to satisfy the Wolfe
> conditions.

Is there an explanation for task? What does task[:2] == 'FG' mean?

I tried the condition separately.

    if np.isnan(phi1) or np.isneginf(phi1):
        break
    if task[:2] == asbytes('FG'):

I also added the isneginf check, because I think there is also the possibility of an infinite loop at phi = -inf.

I would still feel safer if there is a maxiter in the line search, as in the python version, scalar_search_wolfe2

a separate issue:
bfgs does not have an xtol, which means we don't get any indication if only one or some of the parameters go to inf. But I don't have a nice example yet. This case can show up for example in Logit estimation. In a variation of the example in this ticket, the optimized parameters when one parameter goes to inf and the others to a finite number looks sometimes pretty bad.
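The kind of maxiter-bounded loop being asked for can be sketched generically (the `phi` stand-in and the convergence test below are hypothetical, not scipy's actual dcsrch driver):

```python
# Generic sketch of a bounded search loop: a hard iteration cap plus a
# NaN/-inf guard replaces "while 1", so a diverging merit function
# cannot spin forever. phi() is a hypothetical stand-in.
import numpy as np

def phi(alpha):
    # Toy merit function: decreases with alpha, goes NaN for large steps.
    return np.nan if alpha > 8.0 else 1.0 / (1.0 + alpha)

alpha, phi1, failed = 1.0, None, False
for _ in range(10):                  # hard bound instead of "while 1"
    phi1 = phi(alpha)
    if np.isnan(phi1) or np.isneginf(phi1):
        failed = True                # Wolfe conditions unreachable; bail out
        break
    if phi1 < 0.2:                   # stand-in convergence test
        break
    alpha += 1.0
else:
    failed = True                    # ran out of iterations
```

The for/else form makes the "ran out of iterations" outcome explicit instead of leaving the caller to guess why the loop ended.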
Thanks,

Josef

>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From johann.cohentanugi at gmail.com Tue Oct 25 07:34:41 2011
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Tue, 25 Oct 2011 13:34:41 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References: <4EA5A55B.2010100@gmail.com>
Message-ID: <4EA69ED1.2060003@gmail.com>

I also note that scalar_search_wolfe2 has a maxiter clause (maxiter=10 hardcoded there), which scalar_search_wolfe1 doesn't.

J

On 10/25/2011 11:46 AM, Pauli Virtanen wrote:
> 24.10.2011 19:50, Johann Cohen-Tanugi wrote:
> [clip]
>>     while 1:
>>         stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1,
>>                                                    c1, c2, xtol, task,
>>                                                    amin, amax, isave, dsave)
>>         if task[:2] == asbytes('FG') and not np.isnan(phi1):
>>             alpha1 = stp
>>             phi1 = phi(stp)
>>             derphi1 = derphi(stp)
>>         else:
>>             break
> Looks correct to me. It should bail out from the loop on encountering
> a nan, as in that case it's unlikely it's possible to satisfy the Wolfe
> conditions.
>

From pav at iki.fi Tue Oct 25 07:40:23 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 25 Oct 2011 13:40:23 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References: <4EA5A55B.2010100@gmail.com>
Message-ID:

25.10.2011 13:30, josef.pktd at gmail.com wrote:
[clip]
> Is there an explanation for task? What does task[:2] == 'FG' mean?

Check the minpack source files for dcsrch.f

> I tried the condition separately.
>
>     if np.isnan(phi1) or np.isneginf(phi1):
>         break
>     if task[:2] == asbytes('FG'):
>
> I also added the isneginf check, because I think there is also the
> possibility of an infinite loop at phi = -inf.
>
> I would still feel safer if there is a maxiter in the line search, as
> in the python version, scalar_search_wolfe2

Transforming the while loop to a for-loop could be a reasonable alternative solution,

    for j in xrange(maxiter):
        ...
    else:
        ...  # return indicating Wolfe search failed

> a separate issue:
> bfgs does not have an xtol, which means we don't get any indication if
> only one or some of the parameters go to inf.

The best thing would be to have all the optimizers deal with termination conditions in the same way.

Pauli

-- 
Pauli Virtanen

From johann.cohentanugi at gmail.com Tue Oct 25 07:45:09 2011
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Tue, 25 Oct 2011 13:45:09 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References: <4EA5A55B.2010100@gmail.com>
Message-ID: <4EA6A145.5070107@gmail.com>

I created a pull request at https://github.com/scipy/scipy/pull/96
Maybe we should move the discussion there.

J

On 10/25/2011 01:30 PM, josef.pktd at gmail.com wrote:
> On Tue, Oct 25, 2011 at 5:46 AM, Pauli Virtanen wrote:
>> 24.10.2011 19:50, Johann Cohen-Tanugi wrote:
>> [clip]
>>>     while 1:
>>>         stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, derphi1,
>>>                                                    c1, c2, xtol, task,
>>>                                                    amin, amax, isave, dsave)
>>>         if task[:2] == asbytes('FG') and not np.isnan(phi1):
>>>             alpha1 = stp
>>>             phi1 = phi(stp)
>>>             derphi1 = derphi(stp)
>>>         else:
>>>             break
>> Looks correct to me. It should bail out from the loop on encountering
>> a nan, as in that case it's unlikely it's possible to satisfy the Wolfe
>> conditions.
> Is there an explanation for task? What does task[:2] == 'FG' mean?
>
> I tried the condition separately.
>
>     if np.isnan(phi1) or np.isneginf(phi1):
>         break
>     if task[:2] == asbytes('FG'):
>
> I also added the isneginf check, because I think there is also the
> possibility of an infinite loop at phi = -inf.
>
> I would still feel safer if there is a maxiter in the line search, as
> in the python version, scalar_search_wolfe2
>
> a separate issue:
> bfgs does not have an xtol, which means we don't get any indication if
> only one or some of the parameters go to inf.
> But I don't have a nice example yet. This case can show up for example
> in Logit estimation. In a variation of the example in this ticket, the
> optimized parameters when one parameter goes to inf and the others to
> a finite number looks sometimes pretty bad.
>
> Thanks,
>
> Josef
>
>
>> --
>> Pauli Virtanen
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From dave.hirschfeld at gmail.com Tue Oct 25 07:53:33 2011
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Tue, 25 Oct 2011 11:53:33 +0000 (UTC)
Subject: [SciPy-User] scikits.timeseries plotting broken with latest matplotlib
Message-ID:

With matplotlib v1.1.0 the timeseries plotting code no longer works. I had a look and it seems that the matplotlib API has changed, but not being familiar with the internals of matplotlib I couldn't find a workaround. Since the timeseries code is important to a lot of my work and visualising the data is a large part of that, I'll have to revert to 1.0.1 in the interim.
The specific error I get when running the "Adaptation of date_demo2.py" example is shown below:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
C:\dev\code\ in ()
     28 series = ts.fill_missing_dates(raw_series)
     29 fig = tpl.tsfigure()
---> 30 fsp = fig.add_tsplot(111)
     31 fsp.tsplot(series, '-')
     32

C:\dev\bin\Python27\lib\site-packages\scikits\timeseries\lib\plotlib.pyc in add_tsplot(self, *args, **kwargs)
   1282         if self._series is not None:
   1283             kwargs.update(series=self._series)
-> 1284         return add_generic_subplot(self, *args, **kwargs)
   1285
   1286     add_subplot = add_tsplot

C:\dev\bin\Python27\lib\site-packages\scikits\timeseries\lib\plotlib.pyc in add_generic_subplot(figure_instance, *args, **kwargs)
    175         key = str(key)
    176
--> 177     if key in figure_instance._seen:
    178         ax = figure_instance._seen[key]
    179         figure_instance.sca(ax)

AttributeError: 'TimeSeriesFigure' object has no attribute '_seen'

Cheers,
Dave

From johann.cohen-tanugi at lupm.univ-montp2.fr Mon Oct 24 16:46:44 2011
From: johann.cohen-tanugi at lupm.univ-montp2.fr (Johann Cohen-Tanugi)
Date: Mon, 24 Oct 2011 22:46:44 +0200
Subject: [SciPy-User] fmin_bfgs stuck in infinite loop
In-Reply-To:
References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr>
Message-ID: <4EA5CEB4.6020700@lupm.univ-montp2.fr>

I posted a comment in https://github.com/scipy/scipy/commit/a31acbf966f50665997d1fd1ced41d3a8b3e4187 but I agree that a comprehensive set of checks at optimize.py level is warranted, rather than a kludge in a sub-method.
J On 10/24/2011 10:34 PM, josef.pktd at gmail.com wrote: > On Mon, Oct 24, 2011 at 4:14 PM, wrote: >> On Mon, Oct 24, 2011 at 3:59 PM, wrote: >>> tricky things these reply to all, forwarding to list >>> >>> ---------- Forwarded message ---------- >>> From: >>> Date: Mon, Oct 24, 2011 at 3:52 PM >>> Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop >>> To: johann.cohen-tanugi at lupm.in2p3.fr >>> >>> >>> On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi >>> wrote: >>>> indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit >>>> gracefully, so that Ralf's patch is never encountered. >>> I'm not sure what's going on, >>> >>> I just copied the few lines from >>> https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and >>> the original example stops and I'm not able to produce an endless loop >>> anymore when I try to change around with any of the examples, even >>> when I start with a nan, it stops immediately. I only tried the one >>> parameter example. >> Nope, still endless in linesearch with -np.exp(-x/2.) -np.exp(-x**2) >> the third iteration starts with nan as parameter and never returns >> from the line search. >> >> So something like your original fix to exit a nan linesearch looks necessary. > In my example the linesearch is getting pk=np.nan > > So I tried the fix still in the optimization loop, at the end of the loop after > > Hk = numpy.dot(A1,numpy.dot(Hk,A2)) + rhok * sk[:,numpy.newaxis] \ > * sk[numpy.newaxis,:] > > I added the pk check > #add this > pk = -numpy.dot(Hk,gfk) > if not numpy.isfinite(pk): > #make sure we will have a valid search direction > warnflag = 2 > break > > Now my example stops with the previous iteration, when there is no new > search direction in the next iteration loop. > I'm duplicating now the pk fix, so it's not really a fix yet, but it > seems to work. > > (I would be glad if bfgs is more robust so I dare to use it also.) 
> bfgs is missing a max function evaluation limit, I build it into my > objective function to avoid the endless loop. > > Josef > >> Josef >> >> >>> Can you check that you actually run the code that has this fix? >>> >>> Josef >>> >>> >>>> Johann >>>> >>>> On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >>>>> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >>>>> wrote: >>>>>> Dear Josef >>>>>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>>>>> On Mon, Oct 24, 2011 at 1:56 PM, wrote: >>>>>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>>>>> wrote: >>>>>>>>> Hello, >>>>>>>>> the OP is a colleague of mine and I looked quickly at the code. The >>>>>>>>> infinite >>>>>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>>>>> l.144 >>>>>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>>>>> as >>>>>>>>> well, and the break condition is never met. There is an easy fix : >>>>>>>>> while 1: >>>>>>>>> stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>>>>> derphi1, >>>>>>>>> c1, c2, xtol, task, >>>>>>>>> amin, amax, isave, >>>>>>>>> dsave) >>>>>>>>> if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>>>>> alpha1 = stp >>>>>>>>> phi1 = phi(stp) >>>>>>>>> derphi1 = derphi(stp) >>>>>>>>> else: >>>>>>>>> break >>>>>>>>> >>>>>>>>> but it is not a nice kludge.... Is there a better way to secure this >>>>>>>>> while 1 >>>>>>>>> loop? I am sure I am not covering all possible pathological cases with >>>>>>>>> adding "not np.isnan(phi1)" in the code above. >>>>>>>> Is this still a problem with 0.10 ? >>>>>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>>>>> Well I am a complete newby with github, but I think I went to the head of >>>>>> master before testing, and the problem is still there. 
I can see the code >>>>>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>>>>> copy, >>>>>> and this is testing against +/-inf, not nan. Changing the OP's code to >>>>>> test >>>>>> against inf yields : >>>>>> In [1]: run test_bgfs.py >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> xr = (1 + rho)*xbar - rho*sim[-1] >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>>>>> fmin works [ inf] >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> B = (fb-D-C*db)/(db*db) >>>>>> fmin_bfgs gets stuck in a loop [ nan] >>>>>> >>>>>> so it looks like your code solves the inf situation, but not the nan. 
>>>>> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >>>>> want to have to kill my interpreter session) >>>>> >>>>> isfinite also checks for nans >>>>> >>>>>>>> np.isfinite(np.nan) >>>>> False >>>>> >>>>> so there should be another reason that the linesearch doesn't return. >>>>> >>>>> Josef >>>>> >>>>> >>>>> >>>>> >>>>> >>>>>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>>>>> yes it looks like a duplicate >>>>>>> josef >>>>>>> >>>>>>>> Josef >>>>>>>> >>>>>>>> >>>>>>>>> thoughts? >>>>>>>>> Johann >>>>>>>>> >>>>>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>>>>> >>>>>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>>>>> will >>>>>>>>> get stuck in an infinite loop. >>>>>>>>> >>>>>>>>> I have submitted a bug report: >>>>>>>>> >>>>>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>>>>> >>>>>>>>> but would also like to see if anybody on this list has any suggestions >>>>>>>>> or >>>>>>>>> feedback. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> -- >>>>>>>>> This message has been scanned for viruses and >>>>>>>>> dangerous content by MailScanner, and is >>>>>>>>> believed to be clean. 
>>>>>>>>> _______________________________________________
>>>>>>>>> SciPy-User mailing list
>>>>>>>>> SciPy-User at scipy.org
>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> SciPy-User mailing list
>>>>>>>>> SciPy-User at scipy.org
>>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>>>>>>>
>>>>>>>>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From thomas.d.richardson at kcl.ac.uk Tue Oct 25 05:48:27 2011
From: thomas.d.richardson at kcl.ac.uk (tomrichardson)
Date: Tue, 25 Oct 2011 02:48:27 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Speeding up scipy.integrate.quad
Message-ID: <32716200.post@talk.nabble.com>

I'm currently trying to implement a Metropolis-Hastings algorithm for which a numerical integration has to be called to obtain the acceptance probability of any given trial in the iteration. I'm currently using the scipy.integrate.quad function but it's too slow. Only part of the integrand has an analytical form - in fact it depends on an interpolation function that uses data in two numpy arrays. I've plotted the integrand and it diverges near the lower limit of integration; there is a term

1/(r^2-R^2) where r is the dummy variable over which I am integrating and R is the lower limit of integration [I've already added a small increment to R to avoid a zero division error]

Things I've tried:
- I've implemented all of the other QUADPACK integration methods in scipy, which are all slower
- reduced the epsabs and epsrel limits
- split the interval into subranges that weight the divergence

Are there any special purpose integrators that anyone can recommend that can deal with this integrand and compete favourably with quad for speed?
Ideas: I'm not familiar with the C language but I have Cython and Weave installed as part of the Enthought Python pack. Would implementing the numerical integration in the C compiler increase the speed dramatically? If so, how do I introduce the numpy arrays in the cdef terminology for Cython / are there any C++ recipes that use a sampled data input that I can implement (relatively pain free as a C++ noob)?

--
View this message in context: http://old.nabble.com/Speeding-up-scipy.integrate.quad-tp32716200p32716200.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From denis.laxalde at mcgill.ca Tue Oct 25 09:21:34 2011
From: denis.laxalde at mcgill.ca (Denis Laxalde)
Date: Tue, 25 Oct 2011 09:21:34 -0400
Subject: [SciPy-User] 2 questions: optimize.leastsq
In-Reply-To: <20111025104937.130680@gmx.net>
References: <20111025104937.130680@gmx.net>
Message-ID: <20111025092134.2b1acb92@mcgill.ca>

On Tue, 25 Oct 2011 12:49:37 +0200, Johannes Radinger wrote:
> It's all about the optimize.leastsq. I use that to fit a function with several conditions. As a result I get the parameter estimates I am looking for and the 'ier'. According to the manual this is:
>
> "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise,
> the solution was not found"
>
> I just want to know: What exactly do the numbers mean? What is if it is 1 or 2? Is that any information
> about the quality of the fit?

These numbers indicate how the solver terminated, i.e. based on which criterion. You may have a look at the 'mesg' output which describes how the solver terminated.
The documentation is probably misleading here as it describes mesg as a "message giving information about the cause of failure" whereas it's actually a general message. -- Denis Laxalde From jsseabold at gmail.com Tue Oct 25 09:46:23 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 25 Oct 2011 09:46:23 -0400 Subject: [SciPy-User] 2 questions: optimize.leastsq In-Reply-To: <20111025092134.2b1acb92@mcgill.ca> References: <20111025104937.130680@gmx.net> <20111025092134.2b1acb92@mcgill.ca> Message-ID: On Tue, Oct 25, 2011 at 9:21 AM, Denis Laxalde wrote: > On Tue, 25 Oct 2011 12:49:37 +0200, > Johannes Radinger wrote: >> It's all about the optimize.leastsq. I use that to fit a function with several conditions. As a result I get the parameter estimates I am looking for and the 'ier'. According to the manual this is: >> >> "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise, >> the solution was not found" >> >> I just want to know: What exactly do the numbers mean? What is if it is 1 or 2? Is that any information >> about the quality of the fit? > > These numbers indicate how the solver terminated, i.e. based on which > criterion. You may have a look at the 'mesg' output which describes > how the solver terminated. The documentation is probably > misleading here as it describes mesg as a "message giving information > about the cause of failure" whereas it's actually a general message. > You can also look at the source, if you don't want to use full_output=1 for some reason. 
https://github.com/scipy/scipy/blob/master/scipy/optimize/minpack.py#L294 http://www.netlib.org/minpack/lmdif.f Skipper From josef.pktd at gmail.com Tue Oct 25 10:55:10 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 25 Oct 2011 10:55:10 -0400 Subject: [SciPy-User] 2 questions: optimize.leastsq In-Reply-To: <20111025104937.130680@gmx.net> References: <20111025104937.130680@gmx.net> Message-ID: On Tue, Oct 25, 2011 at 6:49 AM, Johannes Radinger wrote: > Hi, > > I've got two questions considering the least square optimization: > > 1 ) I asked that already some time ago, but couldn't find the email with the answer anymore. Maybe you can help me out again: > > It's all about the optimize.leastsq. I use that to fit a function with several conditions. As a result I get the parameter estimates I am looking for and the 'ier'. According to the manual this is: > > "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was found. Otherwise, > the solution was not found" > > I just want to know: What exactly do the numbers mean? What is if it is 1 or 2? Is that any information > about the quality of the fit? > > 2) I am doing following optimization: > > def pdf(x,s1,s2,p): > ? ?return (p/(math.sqrt(2*math.pi*s1**2))*numpy.exp(-((x-0)**(2)/(2*s1**(2)))))+((1-p)/(math.sqrt(2*math.pi*s2**2))*numpy.exp(-((x-0)**(2)/(2*s2**(2))))) > def equ149(arg): > ? ?s1,s2,p = numpy.abs(arg) > ? ?cond1 = 0.7673 - integrate.quad(pdf,-25,25,args=(s1,s2,p))[0] > ? ?cond2 = 0.8184 - integrate.quad(pdf,-45,45,args=(s1,s2,p))[0] > ? ?cond3 = 0.8320 - integrate.quad(pdf,-55,55,args=(s1,s2,p))[0] > ? ?cond4 = 0.8688 - integrate.quad(pdf,-75,75,args=(s1,s2,p))[0] > ? ?cond5 = 0.8771 - integrate.quad(pdf,-85,85,args=(s1,s2,p))[0] > ? ?cond6 = 0.8951 - integrate.quad(pdf,-95,95,args=(s1,s2,p))[0] > ? ?cond7 = 0.9124 - integrate.quad(pdf,-105,105,args=(s1,s2,p))[0] > ? ?cond8 = 0.9237 - integrate.quad(pdf,-115,115,args=(s1,s2,p))[0] > ? 
?cond9 = 0.935 - integrate.quad(pdf,-125,125,args=(s1,s2,p))[0] > ? ?cond10 = 0.95 - integrate.quad(pdf,-145,145,args=(s1,s2,p))[0] > ? ?cond11 = 0.962 - integrate.quad(pdf,-175,175,args=(s1,s2,p))[0] > ? ?cond12 = 0.9748 - integrate.quad(pdf,-195,195,args=(s1,s2,p))[0] > ? ?cond13 = 0.9876 - integrate.quad(pdf,-205,205,args=(s1,s2,p))[0] > ? ?cond14 = 0.9913 - integrate.quad(pdf,-265,265,args=(s1,s2,p))[0] > ? ?cond15 = 0.9988 - integrate.quad(pdf,-295,295,args=(s1,s2,p))[0] > ? ?cond16 = 0.0012/2 - integrate.quad(pdf,315,numpy.inf,args=(s1,s2,p))[0] > ? ?return [cond1, cond2,cond3, cond4, cond5, cond6, cond7, cond8, cond9, cond10, cond11, cond12, cond13, cond14, cond15, cond16] > result=leastsq(equ149, [30.0, 200.0,0.7]) > > I do that about 100 times with different conditions and get always s1, s2 and p. If i then compare the results, s1 is more or less always 10 times smaller than s2. So it seems there is a constant factor. Now my question: Is that "factor 10" only a result of my data (which would be great!)? Or is it an mathematical artifact of the optimization procedure? I don't see a reason why there should be a relationship between s1 and s2. Did you try it with different data? Try a mixture that has s2/s1 = 30 or 2. I don't know if in this case the function has some non-convexities. As an aside: I think using the cdf of the normal distribution would save you the numerical integration and should be faster. Good idea, I never thought of using leastsq (instead of fmin) for this. Josef > > Thank you very much! > > /Johannes > > -- > NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! 
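To make Josef's cdf suggestion concrete: for a zero-mean normal mixture, each integrate.quad call in equ149 can be replaced by a closed-form expression built from the normal cdf, which only needs the error function from the standard library. A minimal sketch (the constants are the ones from the thread, used for illustration):

```python
import math

def norm_cdf(x, s):
    # CDF of a zero-mean normal with standard deviation s
    return 0.5 * (1.0 + math.erf(x / (s * math.sqrt(2.0))))

def mixture_mass(a, s1, s2, p):
    # P(-a < X < a) for the two-component zero-mean normal mixture;
    # this replaces integrate.quad(pdf, -a, a, args=(s1, s2, p))[0]
    def mass(s):
        return norm_cdf(a, s) - norm_cdf(-a, s)
    return p * mass(s1) + (1.0 - p) * mass(s2)

# e.g. the residual for cond1 would become:
# cond1 = 0.7673 - mixture_mass(25.0, s1, s2, p)
```

Besides being faster, this avoids any quadrature error entering the least-squares residuals.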
> Jetzt informieren: http://www.gmx.net/de/go/freephone > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at enthought.com Tue Oct 25 10:54:51 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 25 Oct 2011 09:54:51 -0500 Subject: [SciPy-User] [SciPy-user] Speeding up scipy.integrate.quad In-Reply-To: <32716200.post@talk.nabble.com> References: <32716200.post@talk.nabble.com> Message-ID: On Tue, Oct 25, 2011 at 4:48 AM, tomrichardson < thomas.d.richardson at kcl.ac.uk> wrote: > > I'm currently trying to implement a Metropolis Hastings algorithm for which > a > numerical integration has to be called to obtain the acceptance probability > of any given trial in the iteration. I'm currently using the > scipy.integrate.quad function but it's too slow.Only part of the integrand > has an analytical form - in fact it depends on an interpolation function > that uses data in two numpy arrays. I've plotted the integrand and it > diverges near the lower limit of integration, there is a term > > 1/(r^2-R^2) where r is the the dummy variable over which I am integrating > and R is the lower limit of integration [I've already added a small > increment to R to avoid a zero division error] > The integral of 1/(r^2 - R^2) from r=R to r=a>R is divergent (i.e. the area under that curve is infinite). Does some other term in your integrand cancel the singularity at r=R to give a convergent integral? If so, can you rewrite your integrand so that these terms are handled separately (ideally analytically), and only use quad for the well-behaved part of the integrand? 
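Warren's suggestion can sometimes be implemented with quad itself: if the divergence at r = R is a simple pole that a principal-value interpretation makes finite, quad's weight='cauchy' option evaluates PV of the integral of f(x)/(x - wvar) with the singular factor handled analytically. A sketch on a toy integrand with a known closed form (for the integrand from the thread, f would be the smooth remainder g(r)/(r + R)):

```python
import math
from scipy.integrate import quad

# Principal value of the integral of 1/(x**2 - 1) over [0, 2]:
# factor out the singular 1/(x - 1) and pass the smooth remainder
# f(x) = 1/(x + 1) to quad with weight='cauchy', wvar at the pole.
val, err = quad(lambda x: 1.0 / (x + 1.0), 0.0, 2.0,
                weight='cauchy', wvar=1.0)

# closed form: (1/2) * ln|(x - 1)/(x + 1)| evaluated from 0 to 2
exact = -0.5 * math.log(3.0)
print(val, exact)
```

Whether this applies depends on whether the physical problem really calls for a principal value; if instead another term cancels the pole, splitting the integrand as Warren describes is the right route.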
Warren > > Things I've tried: > -I've implemented all of the other QUADPACK integration methods in scipy > which are all slower > -reduced the epsabs and epsrel limits > -split the interval into subranges that weight the divergence > > Are there any special purpose integrators that anyone can recommend that > can > deal with this integrand and compete favourably with quad for speed? > > Ideas: > > I'm not familiar with the C language but I have Cython and Weave installed > as part of the Enthought Python pack. Would implementing the numerical > integration in the C compiler increase the speed dramatically? If so how do > I introduce the numpy arrays in the cdef terminology for Cython / are there > any C++ recipes that use a sampled data input that I can implement > (relatively pain free as a C++ noob). > > -- > View this message in context: > http://old.nabble.com/Speeding-up-scipy.integrate.quad-tp32716200p32716200.html > Sent from the Scipy-User mailing list archive at Nabble.com. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Oct 25 11:46:35 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 25 Oct 2011 17:46:35 +0200 Subject: [SciPy-User] scikits.timesereies plotting broken with latest matplotlib In-Reply-To: References: Message-ID: On Tue, Oct 25, 2011 at 1:53 PM, Dave Hirschfeld wrote: > With matplotlib v1.1.0 the timeseries plotting code no longer works. I had > a > look and it seems that the matplotlib API has changed but not being > familiar > with the internals of matplotlib I couldn't find a workaround. > Matplotlib has its own mailing list, please ask there. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From JRadinger at gmx.at Tue Oct 25 12:01:45 2011 From: JRadinger at gmx.at (Johannes Radinger) Date: Tue, 25 Oct 2011 18:01:45 +0200 Subject: [SciPy-User] 2 questions: optimize.leastsq In-Reply-To: References: Message-ID: <20111025160145.36530@gmx.net> First, thank you for your comments, information and help. Some additional questions arose from your answers: 1) @Denis & Skipper: So an ier message of 1 indicates that the fit is below the given tolerance level? This is by default ftol=1.49012e-08. So is this a good value? Or how can this be interpreted? What does it say about the fit resp. the optimization result? Can I state anything like: "The optimization result ensures ..." 2) @Josef: At the moment it is okay to stay with the pdf instead of the cdf. Anyway, so you mean that this "factor 10" can only be caused by the original data which act as input for the conditions. I have now tried it more than 50 times with different sets of conditions, each also with different starting values, and the fits converged correctly. I just want to ensure that there is no methodological background error which causes these interesting results. Thank you Johannes > Message: 5 > Date: Tue, 25 Oct 2011 09:21:34 -0400 > From: Denis Laxalde > Subject: Re: [SciPy-User] 2 questions: optimize.leastsq > To: SciPy Users List > Message-ID: <20111025092134.2b1acb92 at mcgill.ca> > Content-Type: text/plain; charset="US-ASCII" > > On Tue, 25 Oct 2011 12:49:37 +0200, > Johannes Radinger wrote: > > It's all about the optimize.leastsq. I use that to fit a function with > several conditions. As a result I get the parameter estimates I am looking > for and the 'ier'. According to the manual this is: > > > > "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was > found. Otherwise, > > the solution was not found" > > > > I just want to know: What exactly do the numbers mean? What is if it is > 1 or 2? Is that any information > > about the quality of the fit? 
> > These numbers indicate how the solver terminated, i.e. based on which > criterion. You may have a look at the 'mesg' output which describes > how the solver terminated. The documentation is probably > misleading here as it describes mesg as a "message giving information > about the cause of failure" whereas it's actually a general message. > > -- > Denis Laxalde > > > ------------------------------ > > Message: 6 > Date: Tue, 25 Oct 2011 09:46:23 -0400 > From: Skipper Seabold > Subject: Re: [SciPy-User] 2 questions: optimize.leastsq > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > On Tue, Oct 25, 2011 at 9:21 AM, Denis Laxalde > wrote: > > On Tue, 25 Oct 2011 12:49:37 +0200, > > Johannes Radinger wrote: > >> It's all about the optimize.leastsq. I use that to fit a function with > several conditions. As a result I get the parameter estimates I am looking > for and the 'ier'. According to the manual this is: > >> > >> "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was > found. Otherwise, > >> the solution was not found" > >> > >> I just want to know: What exactly do the numbers mean? What is if it is > 1 or 2? Is that any information > >> about the quality of the fit? > > > > These numbers indicate how the solver terminated, i.e. based on which > > criterion. You may have a look at the 'mesg' output which describes > > how the solver terminated. The documentation is probably > > misleading here as it describes mesg as a "message giving information > > about the cause of failure" whereas it's actually a general message. > > > > You can also look at the source, if you don't want to use > full_output=1 for some reason. 
> > https://github.com/scipy/scipy/blob/master/scipy/optimize/minpack.py#L294 > http://www.netlib.org/minpack/lmdif.f > > Skipper > > > ------------------------------ > > Message: 7 > Date: Tue, 25 Oct 2011 10:55:10 -0400 > From: josef.pktd at gmail.com > Subject: Re: [SciPy-User] 2 questions: optimize.leastsq > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > On Tue, Oct 25, 2011 at 6:49 AM, Johannes Radinger > wrote: > > Hi, > > > > I've got two questions considering the least square optimization: > > > > 1 ) I asked that already some time ago, but couldn't find the email with > the answer anymore. Maybe you can help me out again: > > > > It's all about the optimize.leastsq. I use that to fit a function with > several conditions. As a result I get the parameter estimates I am looking > for and the 'ier'. According to the manual this is: > > > > "An integer flag. If it is equal to 1, 2, 3 or 4, the solution was > found. Otherwise, > > the solution was not found" > > > > I just want to know: What exactly do the numbers mean? What is if it is > 1 or 2? Is that any information > > about the quality of the fit? > > > > 2) I am doing following optimization: > > > > def pdf(x,s1,s2,p): > > ? ?return > (p/(math.sqrt(2*math.pi*s1**2))*numpy.exp(-((x-0)**(2)/(2*s1**(2)))))+((1-p)/(math.sqrt(2*math.pi*s2**2))*numpy.exp(-((x-0)**(2)/(2*s2**(2))))) > > def equ149(arg): > > ? ?s1,s2,p = numpy.abs(arg) > > ? ?cond1 = 0.7673 - integrate.quad(pdf,-25,25,args=(s1,s2,p))[0] > > ? ?cond2 = 0.8184 - integrate.quad(pdf,-45,45,args=(s1,s2,p))[0] > > ? ?cond3 = 0.8320 - integrate.quad(pdf,-55,55,args=(s1,s2,p))[0] > > ? ?cond4 = 0.8688 - integrate.quad(pdf,-75,75,args=(s1,s2,p))[0] > > ? ?cond5 = 0.8771 - integrate.quad(pdf,-85,85,args=(s1,s2,p))[0] > > ? ?cond6 = 0.8951 - integrate.quad(pdf,-95,95,args=(s1,s2,p))[0] > > ? ?cond7 = 0.9124 - integrate.quad(pdf,-105,105,args=(s1,s2,p))[0] > > ? 
?cond8 = 0.9237 - integrate.quad(pdf,-115,115,args=(s1,s2,p))[0] > > ? ?cond9 = 0.935 - integrate.quad(pdf,-125,125,args=(s1,s2,p))[0] > > ? ?cond10 = 0.95 - integrate.quad(pdf,-145,145,args=(s1,s2,p))[0] > > ? ?cond11 = 0.962 - integrate.quad(pdf,-175,175,args=(s1,s2,p))[0] > > ? ?cond12 = 0.9748 - integrate.quad(pdf,-195,195,args=(s1,s2,p))[0] > > ? ?cond13 = 0.9876 - integrate.quad(pdf,-205,205,args=(s1,s2,p))[0] > > ? ?cond14 = 0.9913 - integrate.quad(pdf,-265,265,args=(s1,s2,p))[0] > > ? ?cond15 = 0.9988 - integrate.quad(pdf,-295,295,args=(s1,s2,p))[0] > > ? ?cond16 = 0.0012/2 - > integrate.quad(pdf,315,numpy.inf,args=(s1,s2,p))[0] > > ? ?return [cond1, cond2,cond3, cond4, cond5, cond6, cond7, cond8, cond9, > cond10, cond11, cond12, cond13, cond14, cond15, cond16] > > result=leastsq(equ149, [30.0, 200.0,0.7]) > > > > I do that about 100 times with different conditions and get always s1, > s2 and p. If i then compare the results, s1 is more or less always 10 times > smaller than s2. So it seems there is a constant factor. Now my question: > Is that "factor 10" only a result of my data (which would be great!)? Or is > it an mathematical artifact of the optimization procedure? > > I don't see a reason why there should be a relationship between s1 and > s2. Did you try it with different data? > Try a mixture that has s2/s1 = 30 or 2. I don't know if in this case > the function has some non-convexities. > > As an aside: > I think using the cdf of the normal distribution would save you the > numerical integration and should be faster. > Good idea, I never thought of using leastsq (instead of fmin) for this. > > Josef > > > > > > Thank you very much! > > > > /Johannes > > -- NEU: FreePhone - 0ct/min Handyspartarif mit Geld-zur?ck-Garantie! 
Jetzt informieren: http://www.gmx.net/de/go/freephone From dave.hirschfeld at gmail.com Tue Oct 25 13:13:38 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Tue, 25 Oct 2011 17:13:38 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?scikits=2Etimesereies_plotting_broken_with?= =?utf-8?q?_latest=09matplotlib?= References: Message-ID: Ralf Gommers googlemail.com> writes: > > > On Tue, Oct 25, 2011 at 1:53 PM, Dave Hirschfeld gmail.com> wrote: > With matplotlib v1.1.0 the timeseries plotting code no longer works. I had a > look and it seems that the matplotlib API has changed but not being familiar > with the internals of matplotlib I couldn't find a workaround. > > > Matplotlib has its own mailing list, please ask there.Ralf > Just to clarify, the problem lies with the scikits.timeseries package which depends (in part) upon matplotlib but which hasn't kept up with recent developments. AFAIK scikits.timeseries doesn't have it's own list and the authors recommend this list as the best place to solicit help. -Dave From jsseabold at gmail.com Tue Oct 25 13:17:23 2011 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 25 Oct 2011 13:17:23 -0400 Subject: [SciPy-User] scikits.timesereies plotting broken with latest matplotlib In-Reply-To: References: Message-ID: On Tue, Oct 25, 2011 at 1:13 PM, Dave Hirschfeld wrote: > Ralf Gommers googlemail.com> writes: > >> >> >> On Tue, Oct 25, 2011 at 1:53 PM, Dave Hirschfeld > gmail.com> wrote: >> With matplotlib v1.1.0 the timeseries plotting code no longer works. I had a >> look and it seems that the matplotlib API has changed but not being familiar >> with the internals of matplotlib I couldn't find a workaround. >> >> >> Matplotlib has its own mailing list, please ask there.Ralf >> > > Just to clarify, the problem lies with the scikits.timeseries package which > depends (in part) upon matplotlib but which hasn't kept up with recent > developments. 
> > AFAIK scikits.timeseries doesn't have it's own list and the authors recommend > this list as the best place to solicit help. > FWIW, I've had these lines in scikits.timeseries commented out for a while with no (known) adverse effects until a proper fix. YMMV |4 $ svn diff plotlib.py Index: plotlib.py =================================================================== --- plotlib.py (revision 2267) +++ plotlib.py (working copy) @@ -174,10 +174,16 @@ except TypeError: key = str(key) + """ if key in figure_instance._seen: ax = figure_instance._seen[key] figure_instance.sca(ax) return ax + """ + ax = figure_instance._axstack.get(key) + if ax is not None: + figure_instance.sca(ax) + return ax SubplotClass = kwargs.pop("SubplotClass", Subplot) SubplotClass = kwargs.pop("subclass", SubplotClass) @@ -187,10 +193,12 @@ else: a = SubplotClass(figure_instance, *args, **kwargs) + figure_instance.axes.append(a) - figure_instance._axstack.push(a) + #figure_instance._axstack.push(a) + figure_instance._axstack.add(key,a) figure_instance.sca(a) - figure_instance._seen[key] = a + #figure_instance._seen[key] = a return a ##### ------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimodisasha at gmail.com Tue Oct 25 15:28:13 2011 From: massimodisasha at gmail.com (Massimo Di Stefano) Date: Tue, 25 Oct 2011 15:28:13 -0400 Subject: [SciPy-User] skip lines at the end of file with loadtxt Message-ID: i'm tring to generate an array reading a txt file from internet. 
my target is to use python instead of matlab, to replace these steps in matlab : url=['http://www.cdc.noaa.gov/Correlation/amon.us.long.data']; urlwrite(url,'file.txt'); I'm using this code : urllib.urlretrieve('http://www.cdc.noaa.gov/Correlation/amon.us.long.data', 'file.txt') a = np.loadtxt('file.txt', skiprows=1) but it fails because of the txt description at the end of the file. Do you know if there is a way to skip X lines at the end, something like "skipmultiplerows='1,-4'" (to skip the first and the last 4 rows in the file), or do I have to use some sort of string manipulation (readlines?) instead ? Thanks! --Massimo -------------- next part -------------- An HTML attachment was scrubbed... URL: From hturesson at gmail.com Tue Oct 25 21:51:25 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 25 Oct 2011 21:51:25 -0400 Subject: [SciPy-User] Linking to BLAS from an extension Message-ID: Hi, I'm trying to call dgemm (a BLAS routine) from a C extension. It compiles fine, but when I try to import my python file containing the extension, I get an import error: ImportError: ./_C_fast_spikesort.so: undefined symbol: dgemm_ From http://www.math.utah.edu/software/lapack.html I got the impression that gcc doesn't automatically record the location of the BLAS library, so that at runtime dgemm is not found. Thus, they recommend to put the LDFLAGS line in the makefile. But this doesn't help. Makefile below: # ---- Record the location of BLAS----- LDFLAGS = Wl, -rpath /usr/lib # ---- Link --------------------------- _C_fast_spikesort.so: C_fast_spikesort.o gcc -shared C_fast_spikesort.o -o _C_fast_spikesort.so # ---- gcc C compile ------------------ C_fast_spikesort.o: C_fast_spikesort.c C_fast_spikesort.h gcc -c -fPIC C_fast_spikesort.c -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include/numpy -L /usr/lib -lblas How should I fix it? Is it better to use cblas, and include cblas.h instead? 
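Regarding Massimo's loadtxt question above: numpy.genfromtxt accepts both skip_header and skip_footer, so a fixed number of lines can be dropped at either end of the file without any string manipulation. A sketch on a made-up file with the same shape as the NOAA data (one header line, a free-text description at the end):

```python
import io
import numpy as np

# One header line, two data rows, and a four-line trailing description,
# mimicking the amon.us.long.data layout (values here are made up).
text = """year and two index columns
1948  0.12  0.34
1949  0.56  0.78
Atlantic Multidecadal Oscillation
long description line
-99.99 missing value code
source: NOAA
"""

# genfromtxt can skip lines at both ends of the file;
# loadtxt only offers skiprows for the top.
a = np.genfromtxt(io.StringIO(text), skip_header=1, skip_footer=4)
print(a.shape)
```

With the real file, pass the downloaded 'file.txt' path instead of the StringIO object and adjust skip_footer to the actual number of trailing description lines.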
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hturesson at gmail.com Tue Oct 25 22:19:48 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 25 Oct 2011 22:19:48 -0400 Subject: [SciPy-User] Linking to BLAS from an extension In-Reply-To: References: Message-ID: I changed my code to use cblas instead. I include cblas.h and call cblas_dgemm. It compiles ok, but I get the same import error: ImportError: ./_C_fast_spikesort.so: undefined symbol: cblas_dgemm What am I doing wrong? Thanks, Hjalmar On Tue, Oct 25, 2011 at 9:51 PM, Hjalmar Turesson wrote: > Hi, > > I'm trying to call dgemm (a BLAS routine) from a C extension. It compiles > fine, but when I try to import my python file containing the extension, I > get an import error: > > > ImportError: ./_C_fast_spikesort.so: undefined symbol: dgemm_ > > > From http://www.math.utah.edu/software/lapack.html I got the impression > that gcc doesn't automatically record the location of the BLAS libary, so > that at runtime dgemm is not found. Thus, they recommend to put the LDFLAGS > line in the make file. But this doesn't help. Make file below: > > # ---- Record the location of BLAS----- > LDFLAGS = Wl, -rpath /usr/lib > > # ---- Link --------------------------- > _C_fast_spikesort.so: C_fast_spikesort.o > gcc -shared C_fast_spikesort.o -o _C_fast_spikesort.so > > # ---- gcc C compile ------------------ > C_fast_spikesort.o: C_fast_spikesort.c C_fast_spikesort.h > gcc -c -fPIC C_fast_spikesort.c -I/usr/include/python2.7 > -I/usr/lib/python2.7/site-packages/numpy/core/include/numpy -L /usr/lib > -lblas > > How should I fix it? Is it better to use cblas, and include cblas.h > instead? > -------------- next part -------------- An HTML attachment was scrubbed... 
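A note on the Makefile in Hjalmar's thread: the undefined-symbol ImportError usually means the BLAS library was never named at link time. The `-lblas` flag on the compile-only (`-c`) line is ignored, and the `LDFLAGS` variable defined at the top is never referenced by either rule (it is also malformed; the usual linker spelling is `-Wl,-rpath,/usr/lib`). A sketch of a corrected Makefile, keeping the paths from the original message (they may differ per system, and recipe lines must start with a literal tab):

```make
# ---- Link: the BLAS library must appear here, after the object file ----
_C_fast_spikesort.so: C_fast_spikesort.o
	gcc -shared C_fast_spikesort.o -o _C_fast_spikesort.so \
	    -L/usr/lib -lblas -Wl,-rpath,/usr/lib

# ---- Compile: -l / -L flags have no effect at this stage ----
C_fast_spikesort.o: C_fast_spikesort.c C_fast_spikesort.h
	gcc -c -fPIC C_fast_spikesort.c \
	    -I/usr/include/python2.7 \
	    -I/usr/lib/python2.7/site-packages/numpy/core/include/numpy
```

For the cblas_dgemm variant, link against whatever library actually provides the cblas symbols on the system (e.g. `-lcblas`, or the ATLAS library) instead of `-lblas`.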
URL: From dave.hirschfeld at gmail.com Wed Oct 26 07:52:09 2011 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Wed, 26 Oct 2011 11:52:09 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?scikits=2Etimesereies_plotting_broken_with?= =?utf-8?q?_latest=09matplotlib?= References: Message-ID: Skipper Seabold gmail.com> writes: > FWIW, I've had these lines in scikits.timeseries commented out for a while with no (known) adverse effects until a proper fix. YMMV|4 $ svn diff plotlib.py > <--snip--> Brilliant, thanks Skipper - works great for me! Cheers, Dave From johann.cohen-tanugi at lupm.univ-montp2.fr Wed Oct 26 01:18:37 2011 From: johann.cohen-tanugi at lupm.univ-montp2.fr (Johann Cohen-Tanugi) Date: Wed, 26 Oct 2011 07:18:37 +0200 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr> Message-ID: <4EA7982D.90705@lupm.univ-montp2.fr> Hi Josef, can you provide me with the code with which you exercize the nan occurrence? It looks more reasonable than the brutal code the OP put in track, and could be used in optimize/tests/test_optimize.py best, Johann On 10/24/2011 10:14 PM, josef.pktd at gmail.com wrote: > On Mon, Oct 24, 2011 at 3:59 PM, wrote: >> tricky things these reply to all, forwarding to list >> >> ---------- Forwarded message ---------- >> From: >> Date: Mon, Oct 24, 2011 at 3:52 PM >> Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop >> To: johann.cohen-tanugi at lupm.in2p3.fr >> >> >> On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi >> wrote: >>> indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit >>> gracefully, so that Ralf's patch is never encountered. 
>> I'm not sure what's going on, >> >> I just copied the few lines from >> https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and >> the original example stops and I'm not able to produce an endless loop >> anymore when I try to change around with any of the examples, even >> when I start with a nan, it stops immediately. I only tried the one >> parameter example. > Nope, still endless in linesearch with -np.exp(-x/2.) -np.exp(-x**2) > the third iteration starts with nan as parameter and never returns > from the line search. > > So something like your original fix to exit a nan linesearch looks necessary. > > Josef > > >> Can you check that you actually run the code that has this fix? >> >> Josef >> >> >>> Johann >>> >>> On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >>>> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >>>> wrote: >>>>> Dear Josef >>>>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>>>> On Mon, Oct 24, 2011 at 1:56 PM, wrote: >>>>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>>>> wrote: >>>>>>>> Hello, >>>>>>>> the OP is a colleague of mine and I looked quickly at the code. The >>>>>>>> infinite >>>>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>>>> l.144 >>>>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>>>> as >>>>>>>> well, and the break condition is never met. There is an easy fix : >>>>>>>> while 1: >>>>>>>> stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>>>> derphi1, >>>>>>>> c1, c2, xtol, task, >>>>>>>> amin, amax, isave, >>>>>>>> dsave) >>>>>>>> if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>>>> alpha1 = stp >>>>>>>> phi1 = phi(stp) >>>>>>>> derphi1 = derphi(stp) >>>>>>>> else: >>>>>>>> break >>>>>>>> >>>>>>>> but it is not a nice kludge.... Is there a better way to secure this >>>>>>>> while 1 >>>>>>>> loop? 
I am sure I am not covering all possible pathological cases with >>>>>>>> adding "not np.isnan(phi1)" in the code above. >>>>>>> Is this still a problem with 0.10 ? >>>>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>>>> Well I am a complete newby with github, but I think I went to the head of >>>>> master before testing, and the problem is still there. I can see the code >>>>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>>>> copy, >>>>> and this is testing against +/-inf, not nan. Changing the OP's code to >>>>> test >>>>> against inf yields : >>>>> In [1]: run test_bgfs.py >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> xr = (1 + rho)*xbar - rho*sim[-1] >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>>>> fmin works [ inf] >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>> >>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>>>> RuntimeWarning: invalid value encountered in subtract >>>>> B = (fb-D-C*db)/(db*db) >>>>> fmin_bfgs gets stuck in 
a loop [ nan] >>>>> >>>>> so it looks like your code solves the inf situation, but not the nan. >>>> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >>>> want to have to kill my interpreter session) >>>> >>>> isfinite also checks for nans >>>> >>>>>>> np.isfinite(np.nan) >>>> False >>>> >>>> so there should be another reason that the linesearch doesn't return. >>>> >>>> Josef >>>> >>>> >>>> >>>> >>>> >>>>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>>>> yes it looks like a duplicate >>>>>> josef >>>>>> >>>>>>> Josef >>>>>>> >>>>>>> >>>>>>>> thoughts? >>>>>>>> Johann >>>>>>>> >>>>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>>>> >>>>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>>>> will >>>>>>>> get stuck in an infinite loop. >>>>>>>> >>>>>>>> I have submitted a bug report: >>>>>>>> >>>>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>>>> >>>>>>>> but would also like to see if anybody on this list has any suggestions >>>>>>>> or >>>>>>>> feedback. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> -- >>>>>>>> This message has been scanned for viruses and >>>>>>>> dangerous content by MailScanner, and is >>>>>>>> believed to be clean. 
>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> SciPy-User mailing list >>>>>>>> SciPy-User at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> SciPy-User mailing list >>>>>>>> SciPy-User at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>> >>>>>>>> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed Oct 26 12:16:18 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 26 Oct 2011 12:16:18 -0400 Subject: [SciPy-User] fmin_bfgs stuck in infinite loop In-Reply-To: <4EA7982D.90705@lupm.univ-montp2.fr> References: <4EA5A55B.2010100@gmail.com> <4EA5B0E7.1080106@gmail.com> <4EA5BFF3.2070303@lupm.univ-montp2.fr> <4EA7982D.90705@lupm.univ-montp2.fr> Message-ID: On Wed, Oct 26, 2011 at 1:18 AM, Johann Cohen-Tanugi wrote: > Hi Josef, can you provide me with the code with which you exercize the > nan occurrence? It looks more reasonable than the brutal code the OP put > in track, and could be used in optimize/tests/test_optimize.py I just got the message. bgfs is a variation on the original example bgfs2 is for multiple parameters, current example has bound objective, but one parameter goes to inf but bfgs produces "Optimization terminated successfully." fmin hits maxiter or max function evaluation The scripts are a bit dirty, since I change around things a lot, and I don't know which version triggered which bug. If it's not clear, I can clean them up later. I would like to get a collection of corner case problems, but the umbrella maxiter limit will fix them all, I guess. 
Thanks, Josef > > best, > Johann > > On 10/24/2011 10:14 PM, josef.pktd at gmail.com wrote: >> On Mon, Oct 24, 2011 at 3:59 PM, ?wrote: >>> tricky things these reply to all, forwarding to list >>> >>> ---------- Forwarded message ---------- >>> From: >>> Date: Mon, Oct 24, 2011 at 3:52 PM >>> Subject: Re: [SciPy-User] fmin_bfgs stuck in infinite loop >>> To: johann.cohen-tanugi at lupm.in2p3.fr >>> >>> >>> On Mon, Oct 24, 2011 at 3:43 PM, Johann Cohen-Tanugi >>> ?wrote: >>>> indeed, see the email I just sent : for nan linesearch_wolfe1 does not exit >>>> gracefully, so that Ralf's patch is never encountered. >>> I'm not sure what's going on, >>> >>> I just copied the few lines from >>> https://github.com/scipy/scipy/commit/a31acbf into my scipy 0.9 and >>> the original example stops and I'm not able to produce an endless loop >>> anymore when I try to change around with any of the examples, even >>> when I start with a nan, it stops immediately. ?I only tried the one >>> parameter example. >> Nope, still endless in linesearch with -np.exp(-x/2.) -np.exp(-x**2) >> the third iteration starts with nan as parameter and never returns >> from the line search. >> >> So something like your original fix to exit a nan linesearch looks necessary. >> >> Josef >> >> >>> Can you check that you actually run the code that has this fix? >>> >>> Josef >>> >>> >>>> Johann >>>> >>>> On 10/24/2011 09:26 PM, josef.pktd at gmail.com wrote: >>>>> On Mon, Oct 24, 2011 at 2:39 PM, Johann Cohen-Tanugi >>>>> ? ?wrote: >>>>>> Dear Josef >>>>>> On 10/24/2011 07:58 PM, josef.pktd at gmail.com wrote: >>>>>>> On Mon, Oct 24, 2011 at 1:56 PM, ? ? ?wrote: >>>>>>>> On Mon, Oct 24, 2011 at 1:50 PM, Johann Cohen-Tanugi >>>>>>>> ? ? ?wrote: >>>>>>>>> Hello, >>>>>>>>> the OP is a colleague of mine and I looked quickly at the code. 
The >>>>>>>>> infinite >>>>>>>>> loop in the OP's illustrating script comes from the "while 1" loop in >>>>>>>>> l.144 >>>>>>>>> of linesearch.py : becuse phi0 is np.nan, phi1 is returned as np.nan >>>>>>>>> as >>>>>>>>> well, and the break condition is never met. There is an easy fix : >>>>>>>>> ? ? ?while 1: >>>>>>>>> ? ? ? ? ?stp, phi1, derphi1, task = minpack2.dcsrch(alpha1, phi1, >>>>>>>>> derphi1, >>>>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? c1, c2, xtol, task, >>>>>>>>> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? amin, amax, isave, >>>>>>>>> dsave) >>>>>>>>> ? ? ? ? ?if task[:2] == asbytes('FG') and not np.isnan(phi1): >>>>>>>>> ? ? ? ? ? ? ?alpha1 = stp >>>>>>>>> ? ? ? ? ? ? ?phi1 = phi(stp) >>>>>>>>> ? ? ? ? ? ? ?derphi1 = derphi(stp) >>>>>>>>> ? ? ? ? ?else: >>>>>>>>> ? ? ? ? ? ? ?break >>>>>>>>> >>>>>>>>> but it is not a nice kludge.... Is there a better way to secure this >>>>>>>>> while 1 >>>>>>>>> loop? I am sure I am not covering all possible pathological cases with >>>>>>>>> adding "not np.isnan(phi1)" in the code above. >>>>>>>> Is this still a problem with 0.10 ? >>>>>>>> I thought this fixed it, https://github.com/scipy/scipy/commit/a31acbf >>>>>> Well I am a complete newby with github, but I think I went to the head of >>>>>> master before testing, and the problem is still there. I can see the code >>>>>> snippet from https://github.com/scipy/scipy/commit/a31acbf in my local >>>>>> copy, >>>>>> and this is testing against +/-inf, not nan. Changing the OP's code to >>>>>> test >>>>>> against inf yields : >>>>>> In [1]: run test_bgfs.py >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:303: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? 
if (max(numpy.ravel(abs(sim[1:] - sim[0])))<= xtol \ >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:308: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? xr = (1 + rho)*xbar - rho*sim[-1] >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/optimize.py:350: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? sim[j] = sim[0] + sigma*(sim[j] - sim[0]) >>>>>> fmin works [ inf] >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:132: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:308: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? alpha1 = min(1.0, 1.01*2*(phi0 - old_phi0)/derphi0) >>>>>> >>>>>> /home/cohen/sources/python/scipydev/lib/python2.6/site-packages/scipy/optimize/linesearch.py:417: >>>>>> RuntimeWarning: invalid value encountered in subtract >>>>>> ? B = (fb-D-C*db)/(db*db) >>>>>> fmin_bfgs gets stuck in a loop [ nan] >>>>>> >>>>>> so it looks like your code solves the inf situation, but not the nan. >>>>> It's not my fix (I'm still on scipy 0.9 and avoid bfgs because I don't >>>>> want to have to kill my interpreter session) >>>>> >>>>> isfinite also checks for nans >>>>> >>>>>>>> np.isfinite(np.nan) >>>>> False >>>>> >>>>> so there should be another reason that the linesearch doesn't return. >>>>> >>>>> Josef >>>>> >>>>> >>>>> >>>>> >>>>> >>>>>>> Is http://projects.scipy.org/scipy/ticket/1542 the same? >>>>>> yes it looks like a duplicate >>>>>>> josef >>>>>>> >>>>>>>> Josef >>>>>>>> >>>>>>>> >>>>>>>>> thoughts? 
>>>>>>>>> Johann >>>>>>>>> >>>>>>>>> On 08/14/2011 01:38 AM, b9o2jnbm tsd71eam wrote: >>>>>>>>> >>>>>>>>> I have run into a frustrating problem where scipy.optimize.fmin_bfgs >>>>>>>>> will >>>>>>>>> get stuck in an infinite loop. >>>>>>>>> >>>>>>>>> I have submitted a bug report: >>>>>>>>> >>>>>>>>> http://projects.scipy.org/scipy/ticket/1494 >>>>>>>>> >>>>>>>>> but would also like to see if anybody on this list has any suggestions >>>>>>>>> or >>>>>>>>> feedback. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> -- >>>>>>>>> This message has been scanned for viruses and >>>>>>>>> dangerous content by MailScanner, and is >>>>>>>>> believed to be clean. >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> SciPy-User mailing list >>>>>>>>> SciPy-User at scipy.org >>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> SciPy-User mailing list >>>>>>>>> SciPy-User at scipy.org >>>>>>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>>>>>> >>>>>>>>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: try_endless_bgfs2.py Type: text/x-python Size: 1454 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: try_endless_bgfs.py Type: text/x-python Size: 1206 bytes Desc: not available URL: From hturesson at gmail.com Wed Oct 26 21:27:52 2011 From: hturesson at gmail.com (Hjalmar Turesson) Date: Wed, 26 Oct 2011 21:27:52 -0400 Subject: [SciPy-User] Linking to BLAS from an extension In-Reply-To: References: Message-ID: I got some expert help and the problem is solved. I needed to add -L /usr/lib -lblas to the link line of the make file. The cblas.h header resides in /usr/include. The make file: # ---- Link --------------------------- _C_fast_spikesort.so: C_fast_spikesort.o gcc -shared C_fast_spikesort.o -o _C_fast_spikesort.so -L /usr/lib -lblas # ---- gcc C compile ------------------ C_fast_spikesort.o: C_fast_spikesort.c C_fast_spikesort.h gcc -c -fPIC C_fast_spikesort.c -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include/numpy Best, Hjalmar On Tue, Oct 25, 2011 at 10:19 PM, Hjalmar Turesson wrote: > I changed my code to use cblas instead. I include cblas.h and call > cblas_dgemm. It compiles ok, but I get the same import error: > > ImportError: ./_C_fast_spikesort.so: undefined symbol: cblas_dgemm > > > What am I doing wrong? > > Thanks, > Hjalmar > > > On Tue, Oct 25, 2011 at 9:51 PM, Hjalmar Turesson wrote: > >> Hi, >> >> I'm trying to call dgemm (a BLAS routine) from a C extension. It compiles >> fine, but when I try to import my python file containing the extension, I >> get an import error: >> >> >> ImportError: ./_C_fast_spikesort.so: undefined symbol: dgemm_ >> >> >> From http://www.math.utah.edu/software/lapack.html I got the impression >> that gcc doesn't automatically record the location of the BLAS library, so >> that at runtime dgemm is not found. Thus, they recommend to put the LDFLAGS >> line in the make file. But this doesn't help.
Make file below: >> >> # ---- Record the location of BLAS----- >> LDFLAGS = Wl, -rpath /usr/lib >> >> # ---- Link --------------------------- >> _C_fast_spikesort.so: C_fast_spikesort.o >> gcc -shared C_fast_spikesort.o -o _C_fast_spikesort.so >> >> # ---- gcc C compile ------------------ >> C_fast_spikesort.o: C_fast_spikesort.c C_fast_spikesort.h >> gcc -c -fPIC C_fast_spikesort.c -I/usr/include/python2.7 >> -I/usr/lib/python2.7/site-packages/numpy/core/include/numpy -L /usr/lib >> -lblas >> >> How should I fix it? Is it better to use cblas, and include cblas.h >> instead? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Fri Oct 28 13:41:06 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 28 Oct 2011 10:41:06 -0700 Subject: [SciPy-User] [ANN] SciPy India 2011 Abstracts due November 2nd Message-ID: ========================== SciPy India 2011 Call for Papers ========================== The third `SciPy India Conference `_ will be held from December 4th through the 7th at the `Indian Institute of Technology, Bombay (IITB) `_ in Mumbai, Maharashtra, India. At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is followed by two days of tutorials and a code sprint, during which community experts provide training on several scientific Python packages. We invite you to take part by submitting a talk abstract on the conference website at: http://scipy.in Talk/Paper Submission ========================== We solicit talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics regarding scientific computing using Python, including applications, teaching, development and research.
We welcome contributions from academia as well as industry. Important Dates ========================== November 2, 2011, Wednesday: Abstracts Due November 7, 2011, Monday: Schedule announced November 28, 2011, Monday: Proceedings paper submission due December 4-5, 2011, Sunday-Monday: Conference December 6-7, 2011, Tuesday-Wednesday: Tutorials/Sprints Organizers ========================== * Jarrod Millman, Neuroscience Institute, UC Berkeley, USA (Conference Co-Chair) * Prabhu Ramachandran, Department of Aerospace Engineering, IIT Bombay, India (Conference Co-Chair) * FOSSEE Team From boulogne.f at gmail.com Sun Oct 30 11:48:30 2011 From: boulogne.f at gmail.com (François Boulogne) Date: Sun, 30 Oct 2011 16:48:30 +0100 Subject: [SciPy-User] Derivative in scipy? Message-ID: <4EAD71CE.8090506@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dear all, I was wondering if a piece of code has been developed for derivative calculations, especially for a sample (array of points) like for integration: http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with different methods (right or left first derivatives, second derivatives...) I didn't succeed in finding this in the documentation. Does it exist? If not, is it planned by someone? Thanks. Cheers, - -- François Boulogne.
Membre de l'April - Promouvoir et défendre le logiciel libre http://www.april.org -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux)

iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 XQsGdZYw4rkVMnUTk6Ep =/Nlz -----END PGP SIGNATURE----- From josef.pktd at gmail.com Sun Oct 30 12:03:35 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 30 Oct 2011 12:03:35 -0400 Subject: [SciPy-User] Derivative in scipy? In-Reply-To: <4EAD71CE.8090506@gmail.com> References: <4EAD71CE.8090506@gmail.com> Message-ID: 2011/10/30 François Boulogne : > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Dear all, > > I was wondering if a piece of code has been developed for derivative > calculations, especially for a sample (array of points) like for > integration: > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with > different methods (right or left first derivatives, second derivatives...) > I didn't succeed in finding this in the documentation. Does it exist? If > not, is it planned by someone? There is one helper function in scipy.optimize (scipy.optimize.optimize), nothing else in scipy. My standard recommendation for finite differences is numdifftools, it's on pypi. There is a ticket that asks for its inclusion in scipy, IIRC.
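[Editorial note: the central-difference idea behind these finite-difference tools can be sketched in a few lines of plain numpy. This is an illustrative sketch only; `central_diff` is a hypothetical helper written for this example, not numdifftools' actual API, and packages like numdifftools layer adaptive step-size selection and error estimation on top of this basic rule.]

```python
import numpy as np

def central_diff(f, x0, h=1e-6):
    # Central difference quotient: O(h**2) truncation error for
    # smooth f, versus O(h) for a one-sided (forward) difference.
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

# d/dx exp(x) at x = 0 is exp(0) = 1
print(central_diff(np.exp, 0.0))
```

The step size h trades truncation error against floating-point cancellation, which is exactly the tuning that the dedicated packages mentioned above automate.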
There are some packages on automatic differentiation. (and we have our own hacked together numdiff in statsmodels, just for optimization and Hessian calculations.) Josef > > Thanks. > Cheers, > > - -- > Fran?ois Boulogne. > > Membre de l'April - Promouvoir et d?fendre le logiciel libre > http://www.april.org > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 > XQsGdZYw4rkVMnUTk6Ep > =/Nlz > -----END PGP SIGNATURE----- > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Sun Oct 30 12:54:35 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 30 Oct 2011 17:54:35 +0100 Subject: [SciPy-User] Derivative in scipy? 
In-Reply-To: References: <4EAD71CE.8090506@gmail.com> Message-ID: On Sun, Oct 30, 2011 at 5:03 PM, wrote: > 2011/10/30 Fran?ois Boulogne : > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > Dear all, > > > > I was wondering if a piece of code has been developped for derivative > > calculations, espacially for a sample (array of points) like for > > integration: > > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with > > different methods (right or left first derivatives, second > derivatives...) > > I didn't succeed in finding this in the documentation. Does it exist? If > > not, is it planned by someone? > > There is one helper function in scipy.optimize > (scipy.optimize.optimize), nothing else in scipy. > > My standard recommendation for finite differences is numdifftools, > it's on pypi. There is a ticket that asks for it's inclusion in scipy, > IIRC. > That's http://projects.scipy.org/scipy/ticket/1510, for those that are interested. Ralf > There are some packages on automatic differentiation. > > (and we have our own hacked together numdiff in statsmodels, just for > optimization and Hessian calculations.) > > Josef > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Sun Oct 30 13:50:54 2011 From: e.antero.tammi at gmail.com (eat) Date: Sun, 30 Oct 2011 19:50:54 +0200 Subject: [SciPy-User] Derivative in scipy? In-Reply-To: <4EAD71CE.8090506@gmail.com> References: <4EAD71CE.8090506@gmail.com> Message-ID: Hi, 2011/10/30 Fran?ois Boulogne > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Dear all, > > I was wondering if a piece of code has been developped for derivative > calculations, espacially for a sample (array of points) like for > integration: > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with > different methods (right or left first derivatives, second derivatives...) > I didn't succeed in finding this in the documentation. 
Does it exist? If > not, is it planned by someone? > Some other (possible relevant) links: http://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html http://docs.scipy.org/doc/numpy/reference/generated/numpy.ediff1d.html http://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.central_diff_weights.html#scipy.misc.central_diff_weights Regards, eat > > Thanks. > Cheers, > > - -- > Fran?ois Boulogne. > > Membre de l'April - Promouvoir et d?fendre le logiciel libre > http://www.april.org > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 > XQsGdZYw4rkVMnUTk6Ep > =/Nlz > -----END PGP SIGNATURE----- > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Oct 30 13:55:03 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 30 Oct 2011 12:55:03 -0500 Subject: [SciPy-User] Derivative in scipy? 
In-Reply-To: References: <4EAD71CE.8090506@gmail.com> Message-ID: On Sun, Oct 30, 2011 at 11:03 AM, wrote: > 2011/10/30 Fran?ois Boulogne : > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > Dear all, > > > > I was wondering if a piece of code has been developped for derivative > > calculations, espacially for a sample (array of points) like for > > integration: > > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with > > different methods (right or left first derivatives, second > derivatives...) > > I didn't succeed in finding this in the documentation. Does it exist? If > > not, is it planned by someone? > > There is one helper function in scipy.optimize > (scipy.optimize.optimize), nothing else in scipy. > Well, not exactly "nothing else"... (eat's email arrived as I was typing this, so it will echo some of what he said.) Functions that operate on a discrete sample: numpy.diff This can be used to compute a derivative by dividing by the appropriate power of dx. numpy.ediff1d Like numpy.diff, but strictly for 1D arrays. It also provides the option for specifying values to append to the ends of the array before computing the difference. numpy.gradient Return the gradient of an n-d array. scipy.fftpack.diff Derivative of a periodic sequence. See http://www.scipy.org/Cookbook/KdV for an example. Functions that operate on a callable function: scipy.misc.derivative Find the n-th derivative of a function at point x0. scipy.misc.central_difference_weights Return weights for an Np-point central derivative scipy.optimize.approx_fprime No docstring (sigh), but from the source code (use approx_fprime?? in ipython), it is pretty easy to figure out what it does. Having said that, I think a module specifically for computing derivatives (with good docs and tests), as being discussed in the ticket #1510 ( http://projects.scipy.org/scipy/ticket/1510) would be a nice addition. 
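[Editorial note: the sample-based helpers at the top of this list can be demonstrated directly. A small sketch, assuming a uniformly spaced grid; for non-uniform samples the spacing has to be handled point by point.]

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
y = np.sin(x)
dx = x[1] - x[0]

# Forward differences: n-1 values, best read as derivative
# estimates at the interval midpoints; first-order accurate.
dy_fwd = np.diff(y) / dx

# Central differences in the interior, one-sided at the two
# endpoints: same length as y, second-order accurate inside.
dy_cen = np.gradient(y, dx)

# Both approximate cos(x).
print(np.abs(dy_cen - np.cos(x)).max())
```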
Warren > > My standard recommendation for finite differences is numdifftools, > it's on pypi. There is a ticket that asks for it's inclusion in scipy, > IIRC. > > There are some packages on automatic differentiation. > > (and we have our own hacked together numdiff in statsmodels, just for > optimization and Hessian calculations.) > > Josef > > > > > > Thanks. > > Cheers, > > > > - -- > > Fran?ois Boulogne. > > > > Membre de l'April - Promouvoir et d?fendre le logiciel libre > > http://www.april.org > > -----BEGIN PGP SIGNATURE----- > > Version: GnuPG v1.4.11 (GNU/Linux) > > > > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw > > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd > > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v > > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN > > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN > > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC > > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ > > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj > > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ > > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 > > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 > > XQsGdZYw4rkVMnUTk6Ep > > =/Nlz > > -----END PGP SIGNATURE----- > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at enthought.com Sun Oct 30 13:57:22 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 30 Oct 2011 12:57:22 -0500 Subject: [SciPy-User] Derivative in scipy? In-Reply-To: References: <4EAD71CE.8090506@gmail.com> Message-ID: On Sun, Oct 30, 2011 at 12:55 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sun, Oct 30, 2011 at 11:03 AM, wrote: > >> 2011/10/30 Fran?ois Boulogne : >> > >> > -----BEGIN PGP SIGNED MESSAGE----- >> > Hash: SHA1 >> > >> > Dear all, >> > >> > I was wondering if a piece of code has been developped for derivative >> > calculations, espacially for a sample (array of points) like for >> > integration: >> > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with >> > different methods (right or left first derivatives, second >> derivatives...) >> > I didn't succeed in finding this in the documentation. Does it exist? If >> > not, is it planned by someone? >> >> There is one helper function in scipy.optimize >> (scipy.optimize.optimize), nothing else in scipy. >> > > > Well, not exactly "nothing else"... > (eat's email arrived as I was typing this, so it will echo some of what he > said.) > > Functions that operate on a discrete sample: > > numpy.diff > This can be used to compute a derivative by dividing by the appropriate > power of dx. > > numpy.ediff1d > Like numpy.diff, but strictly for 1D arrays. It also provides the > option > for specifying values to append to the ends of the array before > computing > the difference. > > numpy.gradient > Return the gradient of an n-d array. > > scipy.fftpack.diff > Derivative of a periodic sequence. > See http://www.scipy.org/Cookbook/KdV for an example. > > > Functions that operate on a callable function: > > scipy.misc.derivative > Find the n-th derivative of a function at point x0. 
> > scipy.misc.central_difference_weights > Return weights for an Np-point central derivative > Correction: I put this in the wrong list; central_difference_weights is just a utility function for computing weights (as the name says). It does not compute derivatives of a callable function. Warren > scipy.optimize.approx_fprime > No docstring (sigh), but from the source code (use approx_fprime?? in > ipython), it is pretty easy to figure out what it does. > > > Having said that, I think a module specifically for computing derivatives > (with good docs and tests), as being discussed in the ticket #1510 ( > http://projects.scipy.org/scipy/ticket/1510) would be a nice addition. > > > Warren > > > >> >> My standard recommendation for finite differences is numdifftools, >> it's on pypi. There is a ticket that asks for it's inclusion in scipy, >> IIRC. >> > >> There are some packages on automatic differentiation. >> >> (and we have our own hacked together numdiff in statsmodels, just for >> optimization and Hessian calculations.) >> >> Josef >> >> >> > >> > Thanks. >> > Cheers, >> > >> > - -- >> > Fran?ois Boulogne. 
>> > >> > Membre de l'April - Promouvoir et d?fendre le logiciel libre >> > http://www.april.org >> > -----BEGIN PGP SIGNATURE----- >> > Version: GnuPG v1.4.11 (GNU/Linux) >> > >> > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw >> > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd >> > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v >> > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN >> > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN >> > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC >> > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ >> > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj >> > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ >> > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 >> > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 >> > XQsGdZYw4rkVMnUTk6Ep >> > =/Nlz >> > -----END PGP SIGNATURE----- >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Oct 30 14:23:08 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 30 Oct 2011 14:23:08 -0400 Subject: [SciPy-User] Derivative in scipy? 
In-Reply-To: References: <4EAD71CE.8090506@gmail.com> Message-ID: On Sun, Oct 30, 2011 at 1:57 PM, Warren Weckesser wrote: > > > On Sun, Oct 30, 2011 at 12:55 PM, Warren Weckesser > wrote: >> >> >> On Sun, Oct 30, 2011 at 11:03 AM, wrote: >>> >>> 2011/10/30 Fran?ois Boulogne : >>> > >>> > -----BEGIN PGP SIGNED MESSAGE----- >>> > Hash: SHA1 >>> > >>> > Dear all, >>> > >>> > I was wondering if a piece of code has been developped for derivative >>> > calculations, espacially for a sample (array of points) like for >>> > integration: >>> > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with >>> > different methods (right or left first derivatives, second >>> > derivatives...) >>> > I didn't succeed in finding this in the documentation. Does it exist? >>> > If >>> > not, is it planned by someone? >>> >>> There is one helper function in scipy.optimize >>> (scipy.optimize.optimize), nothing else in scipy. >> >> >> Well, not exactly "nothing else"... >> (eat's email arrived as I was typing this, so it will echo some of what he >> said.) >> >> Functions that operate on a discrete sample: >> >> numpy.diff >> ??? This can be used to compute a derivative by dividing by the >> appropriate >> ??? power of dx. >> >> numpy.ediff1d >> ??? Like numpy.diff, but strictly for 1D arrays.? It also provides the >> option >> ??? for specifying values to append to the ends of the array before >> computing >> ??? the difference. >> >> numpy.gradient >> ??? Return the gradient of an n-d array. >> >> scipy.fftpack.diff >> ??? Derivative of a periodic sequence. >> ??? See http://www.scipy.org/Cookbook/KdV for an example. >> >> >> Functions that operate on a callable function: >> >> scipy.misc.derivative >> ??? Find the n-th derivative of a function at point x0. >> >> scipy.misc.central_difference_weights >> ??? Return weights for an Np-point central derivative > > > Correction:? I put this in the wrong list;? 
central_difference_weights is > just a utility function for computing weights (as the name says). It does > not compute derivatives of a callable function. > > Warren > > >> >> scipy.optimize.approx_fprime >> ??? No docstring (sigh), but from the source code (use approx_fprime?? in >> ??? ipython), it is pretty easy to figure out what it does. >> >> >> Having said that, I think a module specifically for computing derivatives >> (with good docs and tests), as being discussed in the ticket #1510 >> (http://projects.scipy.org/scipy/ticket/1510) would be a nice addition. >> >> >> Warren Ok. I take back the "nothing else" for the original question since in reading to fast, I misinterpreted the initial question with what I usually need, derivatives of a function in several arguments. Josef >> >>> >>> My standard recommendation for finite differences is numdifftools, >>> it's on pypi. There is a ticket that asks for it's inclusion in scipy, >>> IIRC. >>> >>> There are some packages on automatic differentiation. >>> >>> (and we have our own hacked together numdiff in statsmodels, just for >>> optimization and Hessian calculations.) >>> >>> Josef >>> >>> >>> > >>> > Thanks. >>> > Cheers, >>> > >>> > - -- >>> > Fran?ois Boulogne. 
>>> > >>> > Membre de l'April - Promouvoir et d?fendre le logiciel libre >>> > http://www.april.org >>> > -----BEGIN PGP SIGNATURE----- >>> > Version: GnuPG v1.4.11 (GNU/Linux) >>> > >>> > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw >>> > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd >>> > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v >>> > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN >>> > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN >>> > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC >>> > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ >>> > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj >>> > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ >>> > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 >>> > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 >>> > XQsGdZYw4rkVMnUTk6Ep >>> > =/Nlz >>> > -----END PGP SIGNATURE----- >>> > >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From e.antero.tammi at gmail.com Sun Oct 30 14:59:36 2011 From: e.antero.tammi at gmail.com (eat) Date: Sun, 30 Oct 2011 20:59:36 +0200 Subject: [SciPy-User] Derivative in scipy? 
In-Reply-To: References: <4EAD71CE.8090506@gmail.com> Message-ID: Hi, On Sun, Oct 30, 2011 at 7:55 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sun, Oct 30, 2011 at 11:03 AM, wrote: > >> 2011/10/30 Fran?ois Boulogne : >> > >> > -----BEGIN PGP SIGNED MESSAGE----- >> > Hash: SHA1 >> > >> > Dear all, >> > >> > I was wondering if a piece of code has been developped for derivative >> > calculations, espacially for a sample (array of points) like for >> > integration: >> > http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html with >> > different methods (right or left first derivatives, second >> derivatives...) >> > I didn't succeed in finding this in the documentation. Does it exist? If >> > not, is it planned by someone? >> >> There is one helper function in scipy.optimize >> (scipy.optimize.optimize), nothing else in scipy. >> > > > Well, not exactly "nothing else"... > (eat's email arrived as I was typing this, so it will echo some of what he > said.) > > Functions that operate on a discrete sample: > > numpy.diff > This can be used to compute a derivative by dividing by the appropriate > power of dx. > > numpy.ediff1d > Like numpy.diff, but strictly for 1D arrays. It also provides the > option > for specifying values to append to the ends of the array before > computing > the difference. > > numpy.gradient > Return the gradient of an n-d array. > > scipy.fftpack.diff > Derivative of a periodic sequence. > See http://www.scipy.org/Cookbook/KdV for an example. > > > Functions that operate on a callable function: > > scipy.misc.derivative > Find the n-th derivative of a function at point x0. > > scipy.misc.central_difference_weights > Return weights for an Np-point central derivative > > scipy.optimize.approx_fprime > No docstring (sigh), but from the source code (use approx_fprime?? in > ipython), it is pretty easy to figure out what it does. 
> > > Having said that, I think a module specifically for computing derivatives > (with good docs and tests), as being discussed in the ticket #1510 ( > http://projects.scipy.org/scipy/ticket/1510) would be a nice addition. > Perhaps it would be quite reasonable to upgrade the scipy documentation with a new 'derivation topic'. Above outline would be already a nice skeleton for it. Thanks, eat > > > Warren > > > >> >> My standard recommendation for finite differences is numdifftools, >> it's on pypi. There is a ticket that asks for it's inclusion in scipy, >> IIRC. >> > >> There are some packages on automatic differentiation. >> >> (and we have our own hacked together numdiff in statsmodels, just for >> optimization and Hessian calculations.) >> >> Josef >> >> >> > >> > Thanks. >> > Cheers, >> > >> > - -- >> > Fran?ois Boulogne. >> > >> > Membre de l'April - Promouvoir et d?fendre le logiciel libre >> > http://www.april.org >> > -----BEGIN PGP SIGNATURE----- >> > Version: GnuPG v1.4.11 (GNU/Linux) >> > >> > iQIcBAEBAgAGBQJOrXHDAAoJEKkn15fZnrRLxbYP/3/NUzU5metNhGc4tw6Uo0mw >> > kIlAxf1cvtopbJ+JTXrmKkJpyYUI1bMsqVKpPniW073dgPAfAX3Bajymysh5Hgnd >> > 5+0hh7v8JmsIvKm9HrEePSrENYrIVTfFyvRV+tBLhkfHJ9Vj7uUy7a3/lyRS6s7v >> > FEItXLhkNYtqEir8h2eZ7uW178mwq6nBl6Zi5UOjOXq7u0SxcnusKkYyiL+CirSN >> > KafcrU+rUdHw6khE1exj435GMSKx+N3+rV/kDAQWWQc0ncWmdX2Jr7PapT1DQbNN >> > b3UK2NjYGDZE2NMRuZmRmeTIk2S+PVPqRxEu0x62CS4Y4JnTaZx1xSiPGP0d9cZC >> > NG4/9h0LFM8rxj/kxYjBakbIs6LhwPnS1ZvYf00o3eaY4+tLGx3UozoLpEWSCOuQ >> > GrONTWvdUlfVWsd3qOhwCj+NORkz9yXJNqJHulgz6T8fKnMyPL+R80XfRuS1alxj >> > m166ySHxrPDzwiSlV5vWR/1ajrEMAI/GYev55dtzlIx6hxfLVCOryFei/LNtlrN+ >> > lYK48QqateqvLDs+NUq+7azmCAkc8fW5UeD7bBLdoGwlXvYaOEZh8+lzxU2sc0T4 >> > 93kNZAzOzv/CdcI1uhnp2NP8ODGEyRLx5cUlcKqmF/iAqtM2zc5Zca9YiygR2w83 >> > XQsGdZYw4rkVMnUTk6Ep >> > =/Nlz >> > -----END PGP SIGNATURE----- >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > 
>> > http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Sun Oct 30 17:44:56 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 30 Oct 2011 17:44:56 -0400
Subject: [SciPy-User] applications for tukeylambda distribution ?
Message-ID: 

Are there any applications for the Tukey lambda distribution
(http://en.wikipedia.org/wiki/Tukey_lambda_distribution)?

I just stumbled over it looking at PPCC plots
(http://www.itl.nist.gov/div898/handbook/eda/section3/ppccplot.htm and
scipy.stats.morestats), and it looks quite useful for covering or
approximating a large range of distributions.

Josef

From akshar.bhosale at gmail.com Sun Oct 30 23:45:19 2011
From: akshar.bhosale at gmail.com (akshar bhosale)
Date: Mon, 31 Oct 2011 09:15:19 +0530
Subject: [SciPy-User] problem in running test(nose) with numpy/scipy
Message-ID: 

Hi,

I have installed numpy (1.6.0) and scipy (0.9); the nose version is 1.0.
I have the Intel Cluster Toolkit installed on my system (version 11/069,
with MKL 10.3), on a machine with an Intel Xeon processor running RHEL 5.2
x86_64. I built numpy and scipy with the Intel compilers. When I execute
numpy.test() and scipy.test(), the test run hangs.
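The flexibility Josef mentions is easy to demonstrate: the Tukey lambda quantile function Q(p) = (p^λ − (1−p)^λ)/λ reduces exactly to the uniform and logistic distributions for particular λ, and approximates the normal near λ ≈ 0.14. A quick sketch with scipy.stats.tukeylambda (the specific λ values are standard special cases, not anything from the email):

```python
import numpy as np
from scipy import stats

p = np.linspace(0.05, 0.95, 19)

# lam = 1: exactly uniform on (-1, 1), since Q(p) = 2p - 1.
q_uniform = stats.tukeylambda.ppf(p, 1.0)
print(np.allclose(q_uniform, 2 * p - 1))

# lam = 0: exactly logistic, Q(p) = log(p / (1 - p)).
q_logistic = stats.tukeylambda.ppf(p, 0.0)
print(np.allclose(q_logistic, np.log(p / (1 - p))))

# lam ~ 0.14: close to a rescaled normal distribution.
q_tl = stats.tukeylambda.ppf(p, 0.14)
q_norm = stats.norm.ppf(p)
scale = q_norm[-1] / q_tl[-1]   # match the 95% quantiles
print(np.max(np.abs(scale * q_tl - q_norm)))  # small residual
```

This is exactly what makes the PPCC plot useful: sweeping λ and checking the correlation of the probability plot tells you which member of this family (uniform-like, normal-like, logistic-like, heavy-tailed) best matches the data.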
>>> numpy.test(verbose=3)
Running unit tests for numpy
NumPy version 1.6.0
NumPy is installed in /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy
Python version 2.6 (r26:66714, May 29 2011, 15:10:47) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)]
nose version 1.0.0
nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext']
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/scalarmath.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/multiarray_tests.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/core/umath_tests.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/fft/fftpack_lite.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/linalg/lapack_lite.so is executable; skipped
nose.selector: INFO: /home/akshar/Python-2.6/lib/python2.6/site-packages/numpy/random/mtrand.so is executable; skipped
test_api.test_fastCopyAndTranspose ... ok
test_arrayprint.TestArrayRepr.test_nan_inf ... ok
test_str (test_arrayprint.TestComplexArray) ... ok
Ticket 844. ... ok
test_blasdot.test_blasdot_used ... ok
test_blasdot.test_dot_2args ... ok
test_blasdot.test_dot_3args ... ok
test_blasdot.test_dot_3args_errors ... ok
test_creation (test_datetime.TestDateTime) ... ok
test_creation_overflow (test_datetime.TestDateTime) ... ok
test_divisor_conversion_as (test_datetime.TestDateTime) ... ok
test_divisor_conversion_bday (test_datetime.TestDateTime) ... ok
test_divisor_conversion_day (test_datetime.TestDateTime) ... ok
test_divisor_conversion_fs (test_datetime.TestDateTime) ... ok
test_divisor_conversion_hour (test_datetime.TestDateTime) ... ok
test_divisor_conversion_minute (test_datetime.TestDateTime) ... ok
test_divisor_conversion_month (test_datetime.TestDateTime) ... ok
test_divisor_conversion_second (test_datetime.TestDateTime) ... ok
test_divisor_conversion_week (test_datetime.TestDateTime) ... ok
test_divisor_conversion_year (test_datetime.TestDateTime) ... ok
test_hours (test_datetime.TestDateTime) ... ok
test_from_object_array (test_defchararray.TestBasic) ... ok
test_from_object_array_unicode (test_defchararray.TestBasic) ... ok
test_from_string (test_defchararray.TestBasic) ... ok
test_from_string_array (test_defchararray.TestBasic) ... ok
test_from_unicode (test_defchararray.TestBasic) ... ok
test_from_unicode_array (test_defchararray.TestBasic) ... ok
test_unicode_upconvert (test_defchararray.TestBasic) ... ok
test_it (test_defchararray.TestChar) ... ok
test_equal (test_defchararray.TestComparisons) ... ok
test_greater (test_defchararray.TestComparisons) ... ok
test_greater_equal (test_defchararray.TestComparisons) ... ok
test_less (test_defchararray.TestComparisons) ... ok
test_less_equal (test_defchararray.TestComparisons) ... ok
test_not_equal (test_defchararray.TestComparisons) ... ok
test_equal (test_defchararray.TestComparisonsMixed1) ... ok
test_greater (test_defchararray.TestComparisonsMixed1) ... ok
test_greater_equal (test_defchararray.TestComparisonsMixed1) ... ok
test_less (test_defchararray.TestComparisonsMixed1) ... ok
test_less_equal (test_defchararray.TestComparisonsMixed1) ... ok
test_not_equal (test_defchararray.TestComparisonsMixed1) ... ok
test_equal (test_defchararray.TestComparisonsMixed2) ... ok
test_greater (test_defchararray.TestComparisonsMixed2) ... ok
test_greater_equal (test_defchararray.TestComparisonsMixed2) ... ok
test_less (test_defchararray.TestComparisonsMixed2) ... ok
test_less_equal (test_defchararray.TestComparisonsMixed2) ... ok
test_not_equal (test_defchararray.TestComparisonsMixed2) ... ok
test_count (test_defchararray.TestInformation) ... ok
test_endswith (test_defchararray.TestInformation) ... ok
test_find (test_defchararray.TestInformation) ... ok
test_index (test_defchararray.TestInformation) ... ok
test_isalnum (test_defchararray.TestInformation) ... ok
test_isalpha (test_defchararray.TestInformation) ... ok
test_isdigit (test_defchararray.TestInformation) ... ok
test_islower (test_defchararray.TestInformation) ... ok
test_isspace (test_defchararray.TestInformation) ... ok
test_istitle (test_defchararray.TestInformation) ... ok
test_isupper (test_defchararray.TestInformation) ... ok
test_len (test_defchararray.TestInformation) ... ok
test_rfind (test_defchararray.TestInformation) ... ok
test_rindex (test_defchararray.TestInformation) ... ok
test_startswith (test_defchararray.TestInformation) ... ok
test_capitalize (test_defchararray.TestMethods) ... ok
test_center (test_defchararray.TestMethods) ... ok
test_decode (test_defchararray.TestMethods) ... ok
test_encode (test_defchararray.TestMethods) ... ok
test_expandtabs (test_defchararray.TestMethods) ... ok
test_isdecimal (test_defchararray.TestMethods) ... ok
test_isnumeric (test_defchararray.TestMethods) ... ok
test_join (test_defchararray.TestMethods) ... ok
test_ljust (test_defchararray.TestMethods) ... ok
test_lower (test_defchararray.TestMethods) ... ok
test_lstrip (test_defchararray.TestMethods) ... ok
test_partition (test_defchararray.TestMethods) ... ok
test_replace (test_defchararray.TestMethods) ... ok
test_rjust (test_defchararray.TestMethods) ... ok
test_rpartition (test_defchararray.TestMethods) ... ok
test_rsplit (test_defchararray.TestMethods) ... ok
test_rstrip (test_defchararray.TestMethods) ... ok
test_split (test_defchararray.TestMethods) ... ok
test_splitlines (test_defchararray.TestMethods) ... ok
test_strip (test_defchararray.TestMethods) ... ok
test_swapcase (test_defchararray.TestMethods) ... ok
test_title (test_defchararray.TestMethods) ... ok
test_upper (test_defchararray.TestMethods) ... ok
test_add (test_defchararray.TestOperations) ... ok
Ticket #856 ... ok
test_mul (test_defchararray.TestOperations) ... ok
test_radd (test_defchararray.TestOperations) ... ok
test_rmod (test_defchararray.TestOperations) ... ok
test_rmul (test_defchararray.TestOperations) ... ok
test_broadcast_error (test_defchararray.TestVecString) ... ok
test_invalid_args_tuple (test_defchararray.TestVecString) ... ok
test_invalid_function_args (test_defchararray.TestVecString) ... ok
test_invalid_result_type (test_defchararray.TestVecString) ... ok
test_invalid_type_descr (test_defchararray.TestVecString) ... ok
test_non_existent_method (test_defchararray.TestVecString) ... ok
test_non_string_array (test_defchararray.TestVecString) ... ok
test1 (test_defchararray.TestWhitespace) ... ok
test_dtype (test_dtype.TestBuiltin) ... ok
Only test hash runs at all. ... ok
test_metadata_rejects_nondict (test_dtype.TestMetadata) ... ok
test_metadata_takes_dict (test_dtype.TestMetadata) ... ok
test_nested_metadata (test_dtype.TestMetadata) ... ok
test_no_metadata (test_dtype.TestMetadata) ... ok
test1 (test_dtype.TestMonsterType) ... ok
test_different_names (test_dtype.TestRecord) ... ok
test_different_titles (test_dtype.TestRecord) ... ok
Test whether equivalent record dtypes hash the same. ... ok
Test if an appropriate exception is raised when passing bad values to ... ok
Test whether equivalent subarray dtypes hash the same. ... ok
Test whether different subarray dtypes hash differently. ... ok
Test some data types that are equal ... ok
Test some more complicated cases that shouldn't be equal ... ok
Test some simple cases that shouldn't be equal ... ok
test_single_subarray (test_dtype.TestSubarray) ... ok
test_einsum_errors (test_einsum.TestEinSum) ... ok
test_einsum_sums_cfloat128 (test_einsum.TestEinSum) ...

It hangs here.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From boulogne.f at gmail.com Mon Oct 31 15:28:49 2011
From: boulogne.f at gmail.com (François Boulogne)
Date: Mon, 31 Oct 2011 20:28:49 +0100
Subject: [SciPy-User] Derivative in scipy?
In-Reply-To: References: <4EAD71CE.8090506@gmail.com>
Message-ID: <4EAEF6F1.30905@gmail.com>

Le 30/10/2011 18:55, Warren Weckesser a écrit :
>
> Having said that, I think a module specifically for computing derivatives
> (with good docs and tests), as discussed in ticket #1510
> (http://projects.scipy.org/scipy/ticket/1510), would be a nice addition.

Thanks to all of you for your responses. I'll follow the discussion on the
bug tracker.

Regards,

--
François Boulogne.

Membre de l'April - Promouvoir et défendre le logiciel libre
http://www.april.org

From lorenzo.isella at gmail.com Mon Oct 31 17:14:01 2011
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Mon, 31 Oct 2011 22:14:01 +0100
Subject: [SciPy-User] SciPy for Computational Geometry
Message-ID: <4EAF0F99.7080407@gmail.com>

Dear All,

This is admittedly a
bit off topic, but I wonder if anybody on the list is familiar with this
problem (which should belong to computational geometry) and is able to
point me to an implementation (possibly relying on scipy).

Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate
system and that you are looking at a set of (non-overlapping) spheres (all
the spheres are identical, with radius R=1). You ask yourself how many
spheres you can see overall. The result is in general a (positive) real
number, as one sphere may partially eclipse another for an observer at the
origin (e.g. if one sphere is located at (0,0,5) and the other at
(0,0.3,10)).

Does anybody know an algorithm to calculate this quantity efficiently? I
have in mind (for now at least) configurations of fewer than 100 spheres,
so hopefully this should not be too demanding. I had a look at
http://www.qhull.org/ but I am not 100% sure that this is the way to go.

Any suggestion is appreciated.

Many thanks

Lorenzo

From fspaolo at gmail.com Mon Oct 31 17:45:28 2011
From: fspaolo at gmail.com (Fernando Paolo)
Date: Mon, 31 Oct 2011 14:45:28 -0700
Subject: [SciPy-User] Error in Numpy 1.6 timedelta()
Message-ID: 

Hello,

I receive the following error when trying:

>>> import numpy as np
>>> np.timedelta64(10, 's')
TypeError: function takes at most 1 argument (2 given)

I know this is probably fully implemented in Numpy 1.7, but what about
1.6.1?

Thanks!

-Fernando

--
Fernando Paolo
Institute of Geophysics & Planetary Physics
Scripps Institution of Oceanography
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0225

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com Mon Oct 31 18:24:19 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 31 Oct 2011 16:24:19 -0600
Subject: [SciPy-User] Error in Numpy 1.6 timedelta()
In-Reply-To: References: Message-ID: 

On Mon, Oct 31, 2011 at 3:45 PM, Fernando Paolo wrote:
> Hello,
>
> I receive the following error when trying:
>
> >>> import numpy as np
> >>> np.timedelta64(10, 's')
> TypeError: function takes at most 1 argument (2 given)
>
> I know this is probably fully implemented in Numpy 1.7, but what about
> 1.6.1?

Yep, it's working in current master. I'll have to check 1.6.1 later unless
someone who is running it can comment.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hayne at sympatico.ca Mon Oct 31 17:58:18 2011
From: hayne at sympatico.ca (hayne at sympatico.ca)
Date: Mon, 31 Oct 2011 17:58:18 -0400
Subject: [SciPy-User] SciPy for Computational Geometry
In-Reply-To: <4EAF0F99.7080407@gmail.com>
References: <4EAF0F99.7080407@gmail.com>
Message-ID: 

Maybe have a look at "microsphere interpolation":
http://www.dudziak.com/how_microsphere_projection_works.php
(Perhaps just looking at the diagram at the bottom of that page would
suffice for a start.) This is not a Python implementation, but it might
give you some ideas.

--
Cameron Hayne
macdev at hayne.net

On 31-Oct-11, at 5:14 PM, Lorenzo Isella wrote:
> This is admittedly a bit off topic, but I wonder if anybody on the list
> is familiar with this problem (which should belong to computational
> geometry) and is able to point me to an implementation (possibly relying
> on scipy).
> Imagine that you are sitting at the origin (0,0,0) of a 3D coordinate
> system and that you are looking at a set of (non-overlapping) spheres
> (all the spheres are identical and with radius R=1).
> You ask yourself how many spheres you can see overall.
> The result is in general a (positive) real number as one sphere may
> partially eclipse another sphere for an observer in the origin (e.g. if
> one sphere is located at (0,0,5) and the other (0,0.3,10)).
> Does anybody know an algorithm to calculate this quantity efficiently?
> I have in mind (for now at least) configurations of fewer than 100
> spheres, so hopefully this should not be too demanding.
> I had a look at http://www.qhull.org/
> but I am not 100% sure that this is the way to go.
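For what it's worth, Lorenzo's quantity can be estimated without exact computational geometry by Monte Carlo ray casting: shoot uniformly random rays from the origin, record the first sphere each ray hits, and compare each sphere's visible solid angle to the solid angle it would subtend alone. The code below is only an illustrative sketch (unit spheres, observer at the origin, modern-numpy random API); the sphere positions are made up so that the second sphere is genuinely partially eclipsed, and it is not an exact algorithm — accuracy scales as 1/sqrt(number of rays):

```python
import numpy as np

def first_hit(directions, centers, R=1.0):
    """For each unit direction, index of the nearest sphere hit, or -1."""
    n = directions.shape[0]
    hit = np.full(n, -1)
    tmin = np.full(n, np.inf)
    for i, c in enumerate(centers):
        # Ray x(t) = t*d with |d| = 1: t^2 - 2t(d.c) + |c|^2 - R^2 = 0.
        b = directions @ c
        disc = b**2 - (c @ c - R**2)
        ok = disc >= 0
        t = b - np.sqrt(np.where(ok, disc, 0.0))  # nearer intersection
        closer = ok & (t > 0) & (t < tmin)
        tmin[closer] = t[closer]
        hit[closer] = i
    return hit

rng = np.random.default_rng(0)
# Hypothetical configuration: the far sphere is partially eclipsed.
centers = np.array([[0.0, 0.0, 5.0], [0.0, 1.5, 10.0]])

# Uniform random directions on the unit sphere.
d = rng.normal(size=(200_000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)

hit = first_hit(d, centers)
# Solid angle of an isolated unit sphere at distance r:
# 2*pi*(1 - sqrt(1 - (1/r)^2)).
r = np.linalg.norm(centers, axis=1)
omega_alone = 2 * np.pi * (1 - np.sqrt(1 - 1.0 / r**2))
omega_seen = 4 * np.pi * np.bincount(hit[hit >= 0],
                                     minlength=len(centers)) / len(d)

# "Number of spheres seen" = sum of visible fractions.
frac = omega_seen / omega_alone
print(frac, frac.sum())
```

For fewer than 100 spheres this brute-force loop over spheres is cheap; an exact answer would instead require clipping spherical caps against each other on the unit sphere of directions, which is where a computational-geometry library would come in.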