From josef.pktd at gmail.com Fri Mar 1 00:20:30 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 1 Mar 2013 00:20:30 -0500 Subject: [SciPy-User] optimize.brentq and step function In-Reply-To: References: Message-ID: On Thu, Feb 28, 2013 at 11:57 PM, Charles R Harris wrote: > > > On Thu, Feb 28, 2013 at 9:43 PM, Charles R Harris > wrote: >> >> >> >> On Thu, Feb 28, 2013 at 8:44 PM, wrote: >>> >>> brentq documentation says "f must be a continuous function" >>> >>> I forgot that I have a step function and tried brentq and it worked. >>> Is this an accident or a feature? >>> >> >> Feature, the documentation is off. Brentq falls back on bisection when it >> converges too slowly. And there is some subtlety in 'slowly', but it does >> work to find a point where the function changes sign; all that is >> required is that the function be defined everywhere on the interval and that there >> be a finite number of 'zeros'. If you know you have a discontinuity, plain >> old bisection is probably faster, but one of the best things about brentq is >> its generality. > > > And the finite part is wrong. Bisection will always find a `zero` if the > ends have opposite signs, since the interval is halved on every iteration and > it will terminate when the interval is sufficiently small. But all you know > at that point is that the ends have different signs, not that the function > is almost zero there. So continuity is required for the function to actually > be close to zero. Thanks for the explanation, all clear. I also tried optimize.bisect, with similar results to brentq. Related: Is there a general interest in an integer bisection (for a function possibly defined only at integer points)? There is one in stats.distributions; I fixed it, but I haven't read anything about bisection methods (at least not since college). 
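For concreteness, the behaviour Chuck describes can be checked with a minimal sketch: brentq bracketing the jump of a step function (the jump location 0.3 is an arbitrary example, not from the original posts):

```python
import numpy as np
from scipy.optimize import brentq

# A step function: -1 below the jump, +1 above it. It is discontinuous,
# but it is defined everywhere on [-1, 1] and changes sign exactly once.
def f(x):
    return np.sign(x - 0.3)  # hypothetical jump at x = 0.3

# brentq only needs a sign change between the bracket endpoints; its
# bisection fallback shrinks the bracket down onto the discontinuity.
root = brentq(f, -1.0, 1.0, xtol=1e-12)
```

As Chuck points out, the returned point is only guaranteed to separate the signs; for a discontinuous f, f(root) itself need not be anywhere near zero.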
Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From sguo at bournemouth.ac.uk Fri Mar 1 07:11:55 2013 From: sguo at bournemouth.ac.uk (SHIHUI GUO) Date: Fri, 1 Mar 2013 12:11:55 +0000 Subject: [SciPy-User] _dop.error: failed in processing argument list for call-back fcn. In-Reply-To: References: Message-ID: Thanks Warren. You are so helpful! Shihui On 1 March 2013 04:57, Warren Weckesser wrote: > On 2/26/13, SHIHUI GUO wrote: > > Hi Warren, > > > > Thanks for this. The major issue is that I declared "param" in the function > > definition, but didn't pass it when calling. > > > > Another question: when we do > > ========================== > > while test.successful() and test.t < t1: > > test.integrate(test.t+dt) > > ========================== > > the result is returned after each time step. Is there any default way I > > could specify the time span and get the result over the whole time span, > > instead of individual integration steps? > > > > Yes, in fact, that is what it is doing, but the loop is set up to get > a value every `dt` time units. If you want just the final value, > eliminate the loop, and just give the final time as the argument to > test.integrate. > > For example, the following computes the solution to dy/dt = -y with > y(0) = 1 at time t=10: > > ----- > In [34]: def func(t, y): > ....: return -y > ....: > > In [35]: solver = ode(f=func) > > In [36]: solver.set_integrator("lsoda") > Out[36]: <scipy.integrate._ode.ode object at ...> > > In [37]: solver.set_initial_value(1.0) > Out[37]: <scipy.integrate._ode.ode object at ...> > > In [38]: result = solver.integrate(10.0) # Get the solution at t=10. > > In [39]: result > Out[39]: array([ 4.53998024e-05]) > > In [40]: np.exp(-10) > Out[40]: 4.5399929762484854e-05 > > In [41]: solver.successful() > Out[41]: True > ----- > > Warren > > > > Thanks. 
> > > > Shihui > > > > > > On 26 February 2013 11:16, Warren Weckesser > > wrote: > > > >> On 2/26/13, SHIHUI GUO wrote: > >> > Hi all, > >> > > >> > I want to use scipy to implement an oscillator, i.e. solve a > >> > second-order > >> > ordinary differential equation; my code is: > >> > =================================== > >> > from scipy.integrate import ode > >> > > >> > y0,t0 = [0, 1], 0 > >> > > >> > def fun(t, y, params): > >> > rou = 1 > >> > omega = 10 > >> > sigma = 1 > >> > # convergence rate, ie, lambda > >> > conrate = 10 > >> > temp = -conrate*((y[0]^2+y[1]^2)/rou^2-sigma) > >> > dy = temp*y[0] - omega*y[1] > >> > ddy = omega*y[0] + temp*y[1] > >> > return [dy, ddy] > >> > > >> > test = ode(fun).set_integrator('dopri5') > >> > test.set_initial_value(y0, t0) > >> > t1 = 10 > >> > dt = 0.1 > >> > > >> > while test.successful() and test.t < t1: > >> > test.integrate(test.t+dt) > >> > print test.t, test.yy > >> > =================================== > >> > > >> > =================================== > >> > The error says: > >> > _dop.error: failed in processing argument list for call-back fcn. > >> > File "/home/shepherd/python/research/testode.py", line 23, in > >> > test.integrate(test.t+dt) > >> > File > >> > > >> > "/home/shepherd/epd/epd_free-7.3-2-rh5-x86/lib/python2.7/site-packages/scipy/integrate/_ode.py", > >> > line 333, in integrate > >> > self.f_params, self.jac_params) > >> > File > >> > > >> > "/home/shepherd/epd/epd_free-7.3-2-rh5-x86/lib/python2.7/site-packages/scipy/integrate/_ode.py", > >> > line 827, in run > >> > tuple(self.call_args) + (f_params,))) > >> > =================================== > >> > > >> > Previously I used the Ubuntu default Python, and scipy was 0.9.0. Some > >> thread > >> > said it was a bug that had been fixed in 0.10.0, so I switched to > >> Enthought; > >> > now scipy is the newest version, but the error remains. > >> > > >> > Thanks for any help. 
> >> > > >> > Shihui > >> > > >> > >> > >> There are a few problems in your code. > >> > >> You haven't told the `ode` object that your function accepts the extra > >> argument `params`. Normally you would do this with > >> `test.set_f_params(...)`, but since your function doesn't actually use > >> `params`, it is simpler to just change the function signature to > >> > >> def fun(t, y): > >> > >> Next, in Python, the operator to raise a value to a power is **, not > >> ^, so the formula for `temp` should be: > >> > >> temp = -conrate*((y[0]**2+y[1]**2)/rou**2-sigma) > >> > >> Finally, the attribute for the solution is `y`, not `yy`, so the last > >> line should be: > >> > >> print test.t, test.y > >> > >> > >> Cheers, > >> > >> Warren > >> > >> > >> > -- > >> > * > >> > > >> > --------------------------------------------------------------------------------------- > >> > * > >> > > >> > SHIHUI GUO > >> > National Center for Computer Animation > >> > Bournemouth University > >> > United Kingdom > >> > > >> > BU is a Disability Two Ticks Employer and has signed up to the Mindful > >> > Employer charter. Information about the accessibility of University > >> > buildings can be found on the BU DisabledGo webpages [ > >> > http://www.disabledgo.com/en/org/bournemouth-university ] > >> > This email is intended only for the person to whom it is addressed and > >> may > >> > contain confidential information. If you have received this email in > >> error, > >> > please notify the sender and delete this email, which must not be > >> > copied, > >> > distributed or disclosed to any other person. > >> > Any views or opinions presented are solely those of the author and do > >> > not > >> > necessarily represent those of Bournemouth University or its > subsidiary > >> > companies. Nor can any contract be formed on behalf of the University > >> > or > >> its > >> > subsidiary companies via email. 
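Putting Warren's three fixes together (drop the unused `params` argument, use `**` instead of `^`, and read `test.y` instead of `test.yy`), the corrected script would read roughly as follows; parameter values are taken from the original post, with `print` updated to Python 3 syntax:

```python
from scipy.integrate import ode

def fun(t, y):
    rou = 1.0
    omega = 10.0
    sigma = 1.0
    conrate = 10.0  # convergence rate, i.e. lambda
    # ** is exponentiation in Python; ^ is bitwise XOR
    temp = -conrate * ((y[0]**2 + y[1]**2) / rou**2 - sigma)
    dy = temp * y[0] - omega * y[1]
    ddy = omega * y[0] + temp * y[1]
    return [dy, ddy]

y0, t0 = [0, 1], 0
test = ode(fun).set_integrator('dopri5')
test.set_initial_value(y0, t0)

t1, dt = 10, 0.1
while test.successful() and test.t < t1:
    test.integrate(test.t + dt)
    print(test.t, test.y)  # the solution attribute is `y`, not `yy`
```

As a sanity check on the dynamics: the radial term `temp` vanishes on the circle of radius `rou`, so the initial point (0, 1) stays on the unit circle and the solution is a pure rotation at angular frequency `omega`.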
> >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user -- * --------------------------------------------------------------------------------------- * SHIHUI GUO National Center for Computer Animation Bournemouth University United Kingdom BU is a Disability Two Ticks Employer and has signed up to the Mindful Employer charter. Information about the accessibility of University buildings can be found on the BU DisabledGo webpages [ http://www.disabledgo.com/en/org/bournemouth-university ] This email is intended only for the person to whom it is addressed and may contain confidential information. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From evilper at gmail.com Fri Mar 1 14:24:13 2013 From: evilper at gmail.com (Per Nielsen) Date: Fri, 1 Mar 2013 11:24:13 -0800 Subject: [SciPy-User] List of functions with different arguments Message-ID: Hi all I apologize if this is too off-topic, but I figured it was common in scientific computing. I am trying to make a list of copies of the same function, but with different extra input arguments. However, I am having problems getting the wanted output. I thought it was a problem with the way I referred to the function, but I tried a deepcopy, and the functions do seem to have different ids. Here is my output from the attached script: simple [3, 3, 3] [4549677376, 4549677496, 4549677616] deepcopy [3, 3, 3] [4549677736, 4549677856, 4549677976] wanted/expected output: [1, 2, 3] Any help would be greatly appreciated :) Cheers, Per -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fct_pyt.py Type: application/octet-stream Size: 388 bytes Desc: not available URL: From warren.weckesser at gmail.com Fri Mar 1 14:53:39 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 1 Mar 2013 14:53:39 -0500 Subject: [SciPy-User] List of functions with different arguments In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 2:24 PM, Per Nielsen wrote: > Hi all > > I apologize if this is too off topic, but I figured it was common in > scientific computing. > > I am trying to make a list of copies of the same function, but with > different extra input arguments. However, I am having problems getting the > wanted output. I thought it was a problem with the way I refered to the > function, but I tried a deepcopy and also the functions seem to have > different ids. > > Here is my output from the attached script: > > simple > [3, 3, 3] > [4549677376, 4549677496, 4549677616] > deepcopy > [3, 3, 3] > [4549677736, 4549677856, 4549677976] > wanted/expected output: [1, 2, 3] > > Any help would be greatly appreciated :) > > Cheers, > Per > > The problem is this: fct_list = [lambda x: fct(x, aa) for aa in range(1, 4)] does not do what you want it to. The use of `aa` in the lambda expression is not replaced by the values of `aa` in the list comprehension. Instead, your lambdas will use whatever value the name 'aa' has when they are called. In this case, after the list comprehension is done, `aa` is 3, so they always get a=3. In fact, if immediately after you define `fct_list` you do `del aa`, you'll get an error when you try to call fct_list[0](1). One alternative is to import `partial` from `functools`: from functools import partial fct_list = [partial(fct, a=aa) for aa in range(1, 4)] Warren > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
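The attached script (fct_pyt.py) is not preserved in the archive, but the issue Warren describes can be reproduced with a small stand-in `fct` (hypothetical; two arguments, as in Per's description):

```python
from functools import partial

def fct(x, a):
    return x + a  # stand-in for the real function

# Late binding: each lambda looks up `aa` when it is *called*, so after
# the comprehension finishes they all see the last value, aa = 3.
broken = [lambda x: fct(x, aa) for aa in range(1, 4)]
print([g(0) for g in broken])   # [3, 3, 3]

# partial() binds the current value of `aa` at creation time instead.
fixed = [partial(fct, a=aa) for aa in range(1, 4)]
print([g(0) for g in fixed])    # [1, 2, 3]
```

In Python 2 the comprehension variable additionally leaks into the enclosing namespace, which is why `del aa` makes the broken version raise a NameError, as Warren notes.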
URL: From robert.kern at gmail.com Fri Mar 1 15:01:47 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 1 Mar 2013 20:01:47 +0000 Subject: [SciPy-User] List of functions with different arguments In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 7:24 PM, Per Nielsen wrote: > I am trying to make a list of copies of the same function, but with > different extra input arguments. However, I am having problems getting the > wanted output. [...] > wanted/expected output: [1, 2, 3] You probably want to use functools.partial() for this. http://docs.python.org/2/library/functools#functools.partial The problem you are encountering is that the lambda code block looks up the variable `aa` from the global namespace at runtime, not at definition time. The inner `for` loops of the list comprehensions leak the `aa` variable, so you end up with the last one in the global namespace. An alternative is to specify and bind the `aa` value as a keyword argument to the lambda: fct_list = [lambda x, aa=aa: fct(x, aa) for aa in range(1, 4)] -- Robert Kern From michael.aye at ucla.edu Fri Mar 1 19:18:39 2013 From: michael.aye at ucla.edu (Michael Aye) Date: Fri, 1 Mar 2013 16:18:39 -0800 Subject: [SciPy-User] Spline interpolation array instead of loop? Message-ID: Hi! So, I have to do the same spline interpolation (same = same x-axis) for 189 data channels, so I was wondering if there is a way to store the 189 measurements into a 2D array and to set that at once to an interpolator, is that possible somehow? 
I noticed that InterpolatedUnivariateSpline does not allow me to do that, it seems. So, in other words, what I would like to be able to do is: len(x) = 20 len(y_i) = 20 i = 1 ..189 y.shape == (20,189) s = UnivariateSpline(x, y) res = s(new_x) res.shape == (20, 189) Does something like that exist? Have a nice weekend! Michael From charlesr.harris at gmail.com Fri Mar 1 22:07:46 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 1 Mar 2013 20:07:46 -0700 Subject: [SciPy-User] Spline interpolation array instead of loop? In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 5:18 PM, Michael Aye wrote: > So, I have to do the same spline interpolation (same = same x-axis) for 189 > data channels, so I was wondering if there is a way to store the 189 > measurements into a 2D array and to set that at once to an interpolator > [...] Well, I'm working on a spline interpolator for scipy and it works with vector-valued splines, so that should cover your case. At the moment it is in pure Python prototype form, so pretty slow. You might have a look if you are interested and see if it is in the right direction. It is here. Not much is documented at the moment since I've only started on it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Mar 1 22:15:25 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 1 Mar 2013 20:15:25 -0700 Subject: [SciPy-User] Spline interpolation array instead of loop? 
In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 8:07 PM, Charles R Harris wrote: > > On Fri, Mar 1, 2013 at 5:18 PM, Michael Aye wrote: >> So, I have to do the same spline interpolation (same = same x-axis) for 189 >> data channels, so I was wondering if there is a way to store the 189 >> measurements into a 2D array and to set that at once to an interpolator >> [...] > Well, I'm working on a spline interpolator for scipy and it works with > vector-valued splines, so that should cover your case. [...] If you do try it out, and it would be a favor to me if you did ;), feel free to email me directly if you need help setting it up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Mar 1 22:24:44 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 1 Mar 2013 20:24:44 -0700 Subject: [SciPy-User] Spline interpolation array instead of loop? In-Reply-To: References: Message-ID: On Fri, Mar 1, 2013 at 8:15 PM, Charles R Harris wrote: > > On Fri, Mar 1, 2013 at 8:07 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: >> >> On Fri, Mar 1, 2013 at 5:18 PM, Michael Aye wrote: >>> Hi! 
>>> So, I have to do the same spline interpolation (same = same x-axis) for >>> 189 data channels, so I was wondering if there is a way to store the 189 >>> measurements into a 2D array and to set that at once to an interpolator >>> [...] And now I remember that I don't do spline creation yet. What sort of boundary conditions are you using? I also plan to solve for the spline weights for vector-valued data; I'll rough that out this weekend if you are interested. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sat Mar 2 09:22:41 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 2 Mar 2013 09:22:41 -0500 Subject: [SciPy-User] TODO: more one-sided options in statistical tests Message-ID: Options that would be nice to have, if someone has a few spare cycles: most statistical tests in scipy.stats don't have an option for a one-sided alternative. I could use a one-sided ``binom_test`` right now. 
http://ugrad.stat.ubc.ca/R/library/stats/html/binom.test.html My guess is it's at most a few lines, but it takes time to find them and to write the tests. For example, a ttest_ind that allows for one-sided alternatives and for a null hypothesis with a mean difference different from zero: https://github.com/statsmodels/statsmodels/pull/535/files#L13R461 (I needed it to write an equivalence test) Josef 
From nicole.haenni at gmail.com Sat Mar 2 21:40:17 2013 From: nicole.haenni at gmail.com (Nicole Haenni) Date: Sun, 3 Mar 2013 03:40:17 +0100 Subject: [SciPy-User] Survey for framework and library developers: "Information needs in software ecosystems" In-Reply-To: References: Message-ID: Hi, I'm Nicole Haenni and I'm doing research for my thesis at the University of Berne (scg.unibe.ch) with Mircea Lungu and Niko Schwarz. We are researching the monitoring of activity in software ecosystems. This is a study about the information needs that arise in such software ecosystems. I need your help to fill out the survey below; it takes about 10 minutes to complete. A software ecosystem can be a project repository like GitHub, an open source community (e.g. the Apache community) or a language-based community (e.g. Smalltalk has Squeaksource, Ruby has Rubyforge). We formulate our research question as follows: "What information needs arise when developers use code from other projects, or see their own code used elsewhere?" Survey link: http://bit.ly/14Zc71N or original link: https://docs.google.com/spreadsheet/viewform?formkey=dFBJUmVodVU1V3BMMGRPRWxBdjdDbVE6MA Thank you for your support! Nicole Haenni ----------------------------------------- Software Composition Group Institut für Informatik Universität Bern Neubrückstrasse 10 CH-3012 Bern SWITZERLAND -------------- next part -------------- An HTML attachment was scrubbed... URL: From opossumnano at gmail.com Mon Mar 4 05:37:53 2013 From: opossumnano at gmail.com (Tiziano Zito) Date: Mon, 4 Mar 2013 11:37:53 +0100 Subject: [SciPy-User] EuroSciPy 2013 Call for Abstracts Message-ID: <20130304103753.GB30426@bio230.biologie.hu-berlin.de> Dear Scientist using Python, EuroSciPy 2013, the Sixth Annual Conference on Python in Science, takes place in Brussels on 21 - 24 August 2013. 
The conference features two days of tutorials followed by two days of scientific talks that start with our keynote speakers, Cameron Neylon and Peter Wang. The topics presented at EuroSciPy are very diverse, with a focus on advanced software engineering and original uses of Python and its scientific libraries, either in theoretical or experimental research, from both academia and industry. The program includes contributed talks and posters. Submissions for talks and posters are welcome on our website (http://www.euroscipy.org/). Authors must use the web interface to submit an abstract to the conference. In your abstract, please provide details on what Python tools are being employed, and how. The deadline for submission is 28 April 2013. Until 31 March 2013, you can apply for a sprint session on 25 August 2013. Also, potential organizers for EuroSciPy 2014 are welcome to contact the conference committee. SciPythonic Regards, The EuroSciPy 2013 Committee http://www.euroscipy.org/ Conference Chairs: Pierre de Buyl and Nicolas Pettiaux, Université libre de Bruxelles, Belgium Tutorial Chair: Nicolas Rougier, INRIA, Nancy, France Program Chair: Tiziano Zito, Humboldt-Universität zu Berlin, Germany Program Committee Ralf Gommers, ASML, The Netherlands Emmanuelle Gouillart, Joint Unit CNRS/Saint-Gobain, France Kael Hanson, Université Libre de Bruxelles, Belgium Konrad Hinsen, Centre National de la Recherche Scientifique (CNRS), France Hans Petter Langtangen, Simula and University of Oslo, Norway Mike Müller, Python Academy, Germany Raphael Ritz, International Neuroinformatics Coordinating Facility, Stockholm, Sweden Stéfan van der Walt, Applied Mathematics, Stellenbosch University, South Africa Gaël Varoquaux, INRIA Parietal, Saclay, France Nelle Varoquaux, Mines ParisTech, France Pauli Virtanen, Aalto University, Finland Organizing Committee Nicolas Chauvat, Logilab, France Emmanuelle Gouillart, Joint Unit CNRS/Saint-Gobain, France Kael Hanson, Université 
Libre de Bruxelles, Belgium Renaud Lambiotte, University of Namur, Belgium Thomas Lecocq, Royal Observatory of Belgium Mike Müller, Python Academy, Germany Didrik Pinte, Enthought Europe Gaël Varoquaux, INRIA Parietal, Saclay, France Nelle Varoquaux, Mines ParisTech, France From daniele at grinta.net Mon Mar 4 08:24:15 2013 From: daniele at grinta.net (Daniele Nicolodi) Date: Mon, 04 Mar 2013 14:24:15 +0100 Subject: [SciPy-User] Allan Variance Message-ID: <5134A07F.4050907@grinta.net> Hello, does anyone know of, or have written and is willing to share, some code to compute the Allan variance, modified Allan variance and friends, ideally both from frequency and from phase data? I'm writing my own code; it is not that difficult, but I would rather spend the time required for testing and validation on something more fun. If there is no available code implementing the Allan variance, would it be interesting to include it in SciPy? Thank you so much. Best, Daniele From jschoenberger at demuc.de Mon Mar 4 15:38:05 2013 From: jschoenberger at demuc.de (Johannes Schönberger) Date: Mon, 4 Mar 2013 21:38:05 +0100 Subject: [SciPy-User] Announcement: Release of scikits-image 0.8.0 Message-ID: <6466270E-ED8B-49FB-9ECD-EB383D6C1B14@demuc.de> Announcement: scikits-image 0.8.0 ================================= We're happy to announce the 8th version of scikit-image! scikit-image is an image processing toolbox for SciPy that includes algorithms for segmentation, geometric transformations, color space manipulation, analysis, filtering, morphology, feature detection, and more. 
For more information, examples, and documentation, please visit our website: http://scikit-image.org New Features ------------ - New rank filter package with many new functions and a very fast underlying local histogram algorithm, especially for large structuring elements `skimage.filter.rank.*` - New function for small object removal `skimage.morphology.remove_small_objects` - New circular hough transformation `skimage.transform.hough_circle` - New function to draw circle perimeter `skimage.draw.circle_perimeter` and ellipse perimeter `skimage.draw.ellipse_perimeter` - New dense DAISY feature descriptor `skimage.feature.daisy` - New bilateral filter `skimage.filter.denoise_bilateral` - New faster TV denoising filter based on split-Bregman algorithm `skimage.filter.denoise_tv_bregman` - New linear hough peak detection `skimage.transform.hough_peaks` - New Scharr edge detection `skimage.filter.scharr` - New geometric image scaling as convenience function `skimage.transform.rescale` - New theme for documentation and website - Faster median filter through vectorization `skimage.filter.median_filter` - Grayscale images supported for SLIC segmentation - Unified peak detection with more options `skimage.feature.peak_local_max` - `imread` can read images via URL and knows more formats `skimage.io.imread` Additionally, this release adds lots of bug fixes, new examples, and performance enhancements. Contributors to this release ---------------------------- This release was only possible due to the efforts of many contributors, both new and old. 
- Adam Ginsburg - Anders Boesen Lindbo Larsen - Andreas Mueller - Christoph Gohlke - Christos Psaltis - Colin Lea - François Boulogne - Jan Margeta - Johannes Schönberger - Josh Warner (Mac) - Juan Nunez-Iglesias - Luis Pedro Coelho - Marianne Corvellec - Matt McCormick - Nicolas Pinto - Olivier Debeir - Paul Ivanov - Sergey Karayev - Stefan van der Walt - Steven Silvester - Thouis (Ray) Jones - Tony S Yu From ndbecker2 at gmail.com Tue Mar 5 07:13:29 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 05 Mar 2013 07:13:29 -0500 Subject: [SciPy-User] Allan Variance References: <5134A07F.4050907@grinta.net> Message-ID: Daniele Nicolodi wrote: > does anyone know of, or have written and is willing to share, some code to > compute the Allan variance, modified Allan variance and friends, ideally > both from frequency and from phase data? [...] Do you mean, given phase noise power spectral density? From daniele at grinta.net Tue Mar 5 07:25:37 2013 From: daniele at grinta.net (Daniele Nicolodi) Date: Tue, 05 Mar 2013 13:25:37 +0100 Subject: [SciPy-User] Allan Variance In-Reply-To: References: <5134A07F.4050907@grinta.net> Message-ID: <5135E441.80504@grinta.net> On 05/03/2013 13:13, Neal Becker wrote: > Do you mean, given phase noise power spectral density? I mean given phase or frequency time series. But code that computes the Allan variance from a power spectral density may also come in handy for me in the future. Cheers, Daniele From ndbecker2 at gmail.com Tue Mar 5 08:29:35 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 05 Mar 2013 08:29:35 -0500 Subject: [SciPy-User] Allan Variance References: <5134A07F.4050907@grinta.net> <5135E441.80504@grinta.net> Message-ID: Daniele Nicolodi wrote: > I mean given phase or frequency time series. But code that computes the > Allan variance from a power spectral density may also come in handy for me > in the future. I don't have that - but I do have code to generate samples of blocks of phase noise from a given spectral mask. 
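Absent a ready-made routine in SciPy, here is a minimal sketch of the non-overlapping Allan variance, from frequency data or from phase data (assuming equally spaced samples with spacing `tau0`; the function names are arbitrary, and the overlapping/modified variants would need different index sets):

```python
import numpy as np

def avar_from_freq(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    n = y.size // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)  # block averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def avar_from_phase(x, m=1, tau0=1.0):
    """Non-overlapping Allan variance from phase (time-error) data x."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x[::m], n=2)  # second differences at stride m
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

# The two agree when x is the running integral of y:
# x[k] = tau0 * (y[0] + ... + y[k-1]), with x[0] = 0 and tau0 = 1.
y = np.array([1.0, 2.0, 3.0, 4.0])
x = np.concatenate(([0.0], np.cumsum(y)))
print(avar_from_freq(y, m=2), avar_from_phase(x, m=2))  # 2.0 2.0
```

For validation, checking the expected power-law behaviour against simulated noise types (e.g. white FM noise giving a variance proportional to 1/tau) is a reasonable substitute for published tables.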
From preetigupta25 at gmail.com Mon Mar 4 13:24:38 2013 From: preetigupta25 at gmail.com (Preeti) Date: Mon, 4 Mar 2013 10:24:38 -0800 (PST) Subject: [SciPy-User] pupynere/scipy.io.netcdf In-Reply-To: <496B8AD0.2000600@gmail.com> References: <496B8AD0.2000600@gmail.com> Message-ID: <1362421478675-17963.post@n7.nabble.com> Hello, Can someone tell me how to use this package? I got the source, but where do I put it on my Mac? -- View this message in context: http://scipy-user.10969.n7.nabble.com/pupynere-scipy-io-netcdf-tp9235p17963.html Sent from the Scipy-User mailing list archive at Nabble.com. From ralf.gommers at gmail.com Wed Mar 6 12:41:18 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 6 Mar 2013 18:41:18 +0100 Subject: [SciPy-User] pupynere/scipy.io.netcdf In-Reply-To: <1362421478675-17963.post@n7.nabble.com> References: <496B8AD0.2000600@gmail.com> <1362421478675-17963.post@n7.nabble.com> Message-ID: On Mon, Mar 4, 2013 at 7:24 PM, Preeti wrote: > Hello, > > Can someone tell me how to use this package? I got the source but where do > I > put it on my mac? > Don't you have scipy installed? Then there's no need to download a separate source and put it somewhere. Just do: >>> from scipy.io import netcdf >>> f = netcdf.netcdf_file('myfilename.nc') >>> f.<TAB> # tab-complete on f to look at methods and explore the file's contents Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From mooyix at gmail.com Wed Mar 6 20:00:19 2013 From: mooyix at gmail.com (Brendan Dolan-Gavitt) Date: Wed, 6 Mar 2013 20:00:19 -0500 Subject: [SciPy-User] Efficiently adding a vector to every row of a sparse CSR matrix? Message-ID: Hi, As part of implementing a batch calculation of Jensen-Shannon divergence, I need to take a (sparse) 65536-element vector "V" and add it to every row of a (sparse) 500000x65536 matrix "O" of observations. Is there any way to do this that is both space and time efficient?
The usual O+V tries to convert O to a dense matrix, which fails because O is too big to fit in memory (it would take up ~120 GB!). I also can't do it slowly via iteration, because it looks like it's not possible to update a sparse matrix in place. My current solution is to tile V into a new 500000x65536 matrix and then add: import numpy as np import scipy.sparse as sp [...] V = sp.csr_matrix(V) # Create the CSR matrix directly Vindptr = np.arange(0, len(V.indices)*O.shape[0]+1, len(V.indices), dtype=np.int32) Vindices = np.tile(V.indices, O.shape[0]) Vdata = np.tile(V.data, O.shape[0]) mV = sp.csr_matrix((Vdata, Vindices, Vindptr), shape=O.shape) result = O+mV This is reasonably fast (though creating mV takes around 6 seconds on its own), but takes up a lot of memory, even though it is storing a ton of duplicate data. Is there any way to do this efficiently? It seems like there ought to be... -Brendan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Mar 7 11:00:04 2013 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 7 Mar 2013 16:00:04 +0000 (UTC) Subject: [SciPy-User] Efficiently adding a vector to every row of a sparse CSR matrix? References: Message-ID: Brendan Dolan-Gavitt gmail.com> writes: > As part of implementing a batch calculation of > Jensen-Shannon divergence, I need to take a (sparse) > 65536-element vector "V" and add it to every row of > a (sparse) 500000x65536 matrix "O" of observations. > Is there any way to do this that is both space and > time efficient? The usual O+V tries to convert O to > a dense matrix, which fails because O is too big to > fit in memory (it would take up ~120 GB!). What do you need to do with the final M = O + 1 V^T matrix? If you need it for matrix-vector products (e.g.
sparse SVD), it will be cheaper to keep M around as an abstract linear operator rather than to actually form the sparse matrix (which will, in the end, contain n=500000 times duplicated information). I don't think Scipy has a specialized routine for adding a vector to each row of a sparse matrix. To speed up the computation of O + 1 V^T over what you have, you can try writing your own routine for that. This will probably cut the memory requirements by a factor of 2, and probably also speed up the computation by some factor. -- Pauli Virtanen From vanleeuwen.martin at gmail.com Thu Mar 7 18:35:53 2013 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Thu, 7 Mar 2013 15:35:53 -0800 Subject: [SciPy-User] Question about the position and orientation of the 2dcircle in Mayavi's quiver3d plotting function Message-ID: Dear Python users, I am using the Mayavi toolkit quite a bit, because it is a very nicely packaged 3D visualization toolkit. I use it mainly for displaying triangle meshes and point cloud data, but recently I found it necessary to visualize disks as well. The mayavi.mlab.quiver3d() function seems to be adequate for this, however its '2dcircles' are oriented contrary to what I expected. I expected that these 2dcircles would be oriented so that their normal vectors would point in the direction that a normal '2darrow' would point to, but it doesn't. Instead, the normal vector of the 2dcircle is orthogonal to the direction of the 2darrow. In addition the 2dcircle is not centered on the 3D point (specified by the x,y,z input arrays), but touches it. Is there a way around this? Or is there another function I overlooked that centers the circles onto a 3D coordinate? Thanks in advance! Any help is appreciated. Martin -------------- next part -------------- An HTML attachment was scrubbed...
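Pauli's suggestion above — keeping M = O + 1 V^T as an abstract linear operator instead of materialising the tiled sum — can be sketched with `scipy.sparse.linalg.LinearOperator`. This is a minimal illustration, not code from the thread: the shapes are small stand-ins for the real 500000x65536 problem, and the helper names are invented here:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator

rng = np.random.default_rng(0)
O = sp.random(500, 64, density=0.01, format='csr', random_state=0)  # observations
V = sp.random(1, 64, density=0.2, format='csr', random_state=1)     # row to add

def matvec(x):
    # (O + 1 V^T) x = O x + (V . x) 1  -- the tiled matrix is never formed
    return O @ x + (V @ x).item() * np.ones(O.shape[0])

def rmatvec(x):
    # (O + 1 V^T)^T x = O^T x + (sum of x) V
    return O.T @ x + x.sum() * V.toarray().ravel()

M = LinearOperator(O.shape, matvec=matvec, rmatvec=rmatvec, dtype=np.float64)

# Sanity check against the explicitly formed sum on this small example
x = rng.standard_normal(O.shape[1])
dense = O.toarray() + V.toarray()           # broadcasts V over every row
print(np.allclose(M.matvec(x), dense @ x))  # True
```

A `LinearOperator` like this can be passed directly to iterative routines such as `scipy.sparse.linalg.svds`, so the n-fold duplicated data never needs to exist in memory.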
URL: From afraser at lanl.gov Thu Mar 7 23:42:24 2013 From: afraser at lanl.gov (Fraser, Andy) Date: Fri, 8 Mar 2013 04:42:24 +0000 Subject: [SciPy-User] Efficiently adding a vector to every row of a sparse CSR matrix? In-Reply-To: References: Message-ID: If you don't need the 500000x65536 result as a stored object but can use the results one at a time, you can write a function that digs into the internal representation of the sparse matrix and "yields" the result (I think as an iterator). If you use cython, it will be pretty fast. Let me know if that approach will help. If so, I can make a more detailed suggestion. Andy ________________________________ From: scipy-user-bounces at scipy.org [scipy-user-bounces at scipy.org] on behalf of Brendan Dolan-Gavitt [mooyix at gmail.com] Sent: Wednesday, March 06, 2013 6:00 PM To: scipy-user at scipy.org Subject: [SciPy-User] Efficiently adding a vector to every row of a sparse CSR matrix? Hi, As part of implementing a batch calculation of Jensen-Shannon divergence, I need to take a (sparse) 65536-element vector "V" and add it to every row of a (sparse) 500000x65536 matrix "O" of observations. [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Mar 8 05:24:37 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 8 Mar 2013 10:24:37 +0000 Subject: [SciPy-User] Question about the position and orientation of the 2dcircle in Mayavi's quiver3d plotting function In-Reply-To: References: Message-ID: On Thu, Mar 7, 2013 at 11:35 PM, Martin van Leeuwen wrote: > Dear Python users, > > I am using the Mayavi toolkit quite a bit, because it is a very nicely > packaged 3D visualization toolkit. I use it mainly for displaying triangle > meshes and point cloud data, but recently I found it necessary to visualize > disks as well. The mayavi.mlab.quiver3d() function seems to be adequate for > this, however its '2dcircles' are oriented contrary to what I expected. 
I > expected that these 2dcircles would be oriented so that their normal vectors > would point in the direction that a normal '2darrow' would point to, but it > doesn't. Instead, the normal vector of the 2dcircle is orthogonal to the > direction of the 2darrow. In addition the 2dcircle is not centered on the 3D > point (specified by the x,y,z input arrays), but touches it. Is there a way > around this? Or is there another function I overlooked that centers the > circles onto a 3D coordinate? > > Thanks in advance! Any help is appreciated. Questions about Mayavi are best asked on enthought-dev or StackOverflow. https://mail.enthought.com/mailman/listinfo/enthought-dev -- Robert Kern From martin.james.harrison at gmail.com Thu Mar 7 09:00:53 2013 From: martin.james.harrison at gmail.com (Martin Harrison) Date: Thu, 7 Mar 2013 06:00:53 -0800 (PST) Subject: [SciPy-User] A NonLinearModelFit algorithm Message-ID: Hi All - I am very new to scipy so bear with me please :-) I have been using Mathematica recently to mess around with my data. I have a method of calculating an x,y coordinate from 2 or more distance measurements coming from static devices (also x,y coords). The function I use to do this most effectively with the data I have is the Mathematica function: NonlinearModelFit[data, Norm[{x, y} - {x0, y0}], {x0, y0}, {x, y}, Weights -> 1/observations^2] where...
data = {{548189.217202, 5912779.96059, 93}, {548236.967784, 5912717.80716, 39}, {548359.406452, 5912752.54022, 88}, {548358.636206, 5912690.89573, 97}}; observations = {93, 39, 88, 97}; x0, y0 is my solution The mathematica output to the above is: FittedModel[{"Nonlinear", {x0 -> *548334.5910278788*, y0 -> * 5.912703316316227*^6*}, {{x, y}, Sqrt[Abs[x - x0]^2 + Abs[y - y0]^2]}}, {{1/9604, 1/16900, 1/3249, 1/676, 1/900}}, {{548236.967784, 5.91271780716*^6, 98}, {548236.967784, 5.91271780716*^6, 130}, {548359.406452, 5.91275254022*^6, 57}, {548358.636206, 5.91269089573*^6, 26}, {548358.636206, 5.91269089573*^6, 30}}, Function[Null, Internal`LocalizedBlock[{x, x0, y, y0}, #1], {HoldAll}]] Figures in bold are my x0,y0 solution. So I am not fitting a curve but fitting to a point (with weights inversely proportional to the squared distance). I have looked around on google but am simply not sure where to start with this. Any help with this would be very much appreciated. So why am I doing this if mathematica does it? Well to call mathematicascript (using subprocess module) from python code is too slow for what I want to do with live data so want to try rewriting in python to see if the speed can be improved.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdroe at stsci.edu Thu Mar 7 18:54:23 2013 From: mdroe at stsci.edu (Michael Droettboom) Date: Thu, 7 Mar 2013 18:54:23 -0500 Subject: [SciPy-User] SciPy John Hunter Excellence in Plotting Contest In-Reply-To: References: Message-ID: <513928AF.7010201@stsci.edu> Apologies for any accidental cross-posting. Email not displaying correctly? View it in your browser. Scientific Computing with Python-Austin, Texas-June 24-29, 2013 SciPy John Hunter Excellence in Plotting Contest In memory of John Hunter, we are pleased to announce the first SciPy John Hunter Excellence in Plotting Competition. 
This open competition aims to highlight the importance of quality plotting to scientific progress and showcase the capabilities of the current generation of plotting software. Participants are invited to submit scientific plots to be judged by a panel. The winning entries will be announced and displayed at the conference. NumFOCUS is graciously sponsoring cash prizes for the winners in the following amounts: * 1st prize: $500 * 2nd prize: $200 * 3rd prize: $100 Instructions * Entries must be submitted by April 3 via e-mail . * Plots may be produced with any combination of Python-based tools (it is not required that they use matplotlib, for example). * Source code for the plot must be provided, along with a rendering of the plot in a vector format (PDF, PS, etc.). If the data can not be shared for reasons of size or licensing, "fake" data may be substituted, along with an image of the plot using real data. * Entries will be judged on their clarity, innovation and aesthetics, but most importantly for their effectiveness in illuminating real scientific work. Entrants are encouraged to submit plots that were used during the course of research, rather than merely being hypothetical. * SciPy reserves the right to display the entry at the conference, use in any materials or on its website, providing attribution to the original author(s). Important dates: * April 3rd: Plotting submissions due * Monday-Tuesday, June 24 - 25: SciPy 2013 Tutorials, Austin TX * Wednesday-Thursday, June 26 - 27: SciPy 2013 Conference, Austin TX * Winners will be announced during the conference days * Friday-Saturday, June 27 - 28: SciPy 2013 Sprints, Austin TX & remote We look forward to exciting submissions that push the boundaries of plotting, in this, our first attempt at this kind of competition. 
The SciPy Plotting Contest Organizer -Michael Droettboom, Space Telescope Science Institute You are receiving this email because you subscribed to the mailing list or registered for the SciPy 2010 or SciPy 2011 conference in Austin, TX. Unsubscribe mdboom at gmail.com from this list | Forward to a friend | Update your profile *Our mailing address is:* Enthought, Inc. 515 Congress Ave. Austin, TX 78701 Add us to your address book /Copyright (C) 2013 Enthought, Inc. All rights reserved./ -- Michael Droettboom http://www.droettboom.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdroe at stsci.edu Fri Mar 8 09:33:43 2013 From: mdroe at stsci.edu (Michael Droettboom) Date: Fri, 8 Mar 2013 09:33:43 -0500 Subject: [SciPy-User] Fwd: SciPy John Hunter Excellence in Plotting Contest In-Reply-To: <513928AF.7010201@stsci.edu> References: <513928AF.7010201@stsci.edu> Message-ID: <5139F6C7.7020005@stsci.edu> Apologies for any accidental cross-posting. -------------- next part -------------- An HTML attachment was scrubbed...
URL:
URL: From imrehg at gmail.com Sun Mar 10 22:39:11 2013 From: imrehg at gmail.com (Gergely Imreh) Date: Mon, 11 Mar 2013 10:39:11 +0800 Subject: [SciPy-User] Allan Variance In-Reply-To: References: <5134A07F.4050907@grinta.net> <5135E441.80504@grinta.net> Message-ID: Hey, I have written such code before for frequency time series; the (not very clean) source code is here: the allan function in https://github.com/imrehg/physicscalc/blob/master/allan/allanmulti.py the partallan function in https://github.com/imrehg/physicscalc/blob/master/allan/allanmulti3.py There are some caveats, though: it's been a while, and I remember that one has to be careful about how things are calculated when comparing to the Allan variance reported by instruments. I think the first mentioned function is the simple way. The second mentioned function replicates the calculation of one of our instruments in the lab, which uses overlapping time windows, so the calculation needs to be different too. But it's better if you check. You will probably still need to adjust it to your needs, though. Cheers, Greg On 5 March 2013 21:29, Neal Becker wrote: > Daniele Nicolodi wrote: > > > On 05/03/2013 13:13, Neal Becker wrote: > >> Daniele Nicolodi wrote: > >> > >>> Hello, > >>> > >>> does anyone know or have written and is willing to share some code to > >>> compute the Allan variance, modified Allan variance and friends, > ideally > >>> both from frequency than from phase data? > >>> > >>> I'm writing my own code, it is not that difficult, but I would prefer > >>> the time required for testing and validation doing something more fun. > >>> > >>> If there is no available code implementing Allan variance, would it be > >>> interesting to include in in SciPy? > >>> > >>> Thank you so much. > >>> > >>> Best, > >>> Daniele > >> > >> Do you mean, given phase noise power spectral density? > > > > I mean given phase or frequency time series. But also code that > > computes Allan variation from power spectral density may come handy for
> > me in future. > > > > Cheers, > > Daniele > > I don't have that - but do have code to generate samples of blocks of phase > noise from a given spectral mask. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdekauwe at gmail.com Mon Mar 11 01:30:58 2013 From: mdekauwe at gmail.com (mdekauwe) Date: Sun, 10 Mar 2013 22:30:58 -0700 (PDT) Subject: [SciPy-User] maximising a function Message-ID: <1362979858440-17980.post@n7.nabble.com> Hi, I am trying to use the scipy library to optimise 4 parameters so that they return the maximum of a given function. To do this I am using the COBYLA method and setting a few constraints, essentially that these parameter values are positive and that when summed they are less than a given value (N_p). Can I check that what I have is the correct implementation (see below)? The second issue is that part of the function depends on the solution to a quadratic. If I don't resolve positive roots I want to be able to return some value to let the minimiser know that these are bad values, but I can't work out how to do this. I tried returning a very large value but this didn't seem to work, so then I tried to use NaNs. What is the correct way to do this? thanks, Code (this will run). #!/usr/bin/env python """ Optimise the distribution of nitrogen within the photosynthetic system to maximise photosynthesis for a given PAR and CO2 concentration. Reference: ========= * Medlyn (1996) The Optimal Allocation of Nitrogen within the C3 Photosynthetic System at Elevated CO2. Australian Journal of Plant Physiology, 23, 593-603.
""" import sys import numpy as np import scipy.optimize class PhotosynthesisModel(object): """ Photosynthesis model based on Evans (1989) References: ----------- * Evans, 1989 """ def __init__(self, alpha=0.425, beta=0.7, g_m=0.4, theta=0.7, K_cat=24.0, K_s=1.25E-04, MAXIMISE=False): """ Parameters ---------- alpha : float quantum yield of electron transport (mol mol-1) beta : float constant of proportionality between atmospheric and intercellular CO2 concentrations (-) g_m : float Conductance to CO2 transfer between intercellular spaces and sites of carboxylation (mol m-2 s-1 bar-1) theta : float curvature of the light response (-) K_cat : float specific activitt of Rubisco (mol CO2 mol-1 Rubisco s-1) K_s : float constant of proportionality between N_s and Jmax (g N m2 s umol-1) MAXIMISE : logical if we want to minimise a function we need to return f(x), if we want to maximise a function we return -f(x) """ self.alpha = alpha self.beta = beta self.g_m = g_m self.theta = theta self.K_cat = K_cat self.K_s = K_s self.sign = 1.0 if MAXIMISE: self.sign = -1.0 else: self.sign = 1.0 def calc_photosynthesis(self, N_pools=None, par=None, Km=None, Rd=None, gamma_star=None, Ca=None, N_p=None, error=False): """ Leaf temperature is assumed to be constant = 25 deg C. 
Parameters ---------- N_pools : list of floats list containing the 5 N pools: N_c - amount of N in chlorophyll (mol m-2) N_s - amount of N in soluble protein other than Rubisco (mol m-2) N_e - amount of N in electron transport components (mol m-2) N_r - amount of N in Rubisco (mol m-2) N_other - amount of leaf N not involved in photosynthesis (mol m-2) par : float incident PAR (umol m-2 s-1) Km : float effective Michalis-Menten coefficent of Rubisco (umol mol-1) Rd : float rate of dark respiration (umol m2 s-1) gamma_star : float leaf CO2 compensation point (umol mol-1) Ca : float atmospheric CO2 concentration (umol mol-1) N_p : float total amount of leaf N to be distrubuted (mol m-2) Returns: -------- An : float Net leaf assimilation rate [umol m-2 s-1] """ # unpack...we need to pass as a list for the optimisation step (N_e, N_c, N_r, N_other) = N_pools # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) # store values so I can get them all later self.N_pool_store = np.array([N_e, N_c, N_r, N_s, N_other]) # rate of electron transport, a saturating function of absorbed PAR J, error = self.quadratic(a=self.theta, b=-(self.alpha * absorptance * par + Jmax), c=self.alpha * absorptance * par * Jmax) if error: return None, error # CO2 concentration in the intercellular air spaces Ci = self.beta * Ca # Photosynthesis when Rubisco is limiting a = 1. / self.g_m b = (Rd - Vcmax) / self.g_m - Ci - Km c = Vcmax * (Ci - gamma_star) - Rd * (Ci + Km) Wc, error = self.quadratic(a=a, b=b, c=c) if error: return None, error # Photosynthesis when electron transport is limiting a = -4. 
/ self.g_m b = (Rd - J / 4.0) / self.g_m - Ci - 2.0 * gamma_star c = J / 4.0 * (Ci - gamma_star) - Rd * (Ci + 2.0 * gamma_star) Wj, error = self.quadratic(a=a, b=b, c=c) if error: return None, error Aleaf = np.minimum(Wc, Wj) Cc = Ci - (Aleaf / self.g_m); An = (1.0 - (gamma_star / Cc)) * Aleaf - Rd return An, error def assim(self, Cc, a1, a2): """Photosynthesis with the limitation defined by the variables passed as a1 and a2, i.e. if we are calculating vcmax or jmax limited. Parameters: ---------- Cc : float CO2 concentration at the site of of carboxylation gamma_star : float CO2 compensation point in the abscence of mitochondrial respiration a1 : float variable depends on whether the calculation is light or rubisco limited. a2 : float variable depends on whether the calculation is light or rubisco limited. Returns: ------- assimilation_rate : float assimilation rate assuming either light or rubisco limitation. """ return a1 * (Cc / (Cc + a2)) def quadratic(self, a=None, b=None, c=None, error=False): """ minimilist quadratic solution as root for J solution should always be positive, so I have excluded other quadratic solution steps. 
If roots for J are negative or equal to zero then the data is bogus Parameters: ---------- a : float co-efficient b : float co-efficient c : float co-efficient Returns: ------- val : float positive root """ d = b**2 - 4.0 * a * c # discriminant if np.any(d <= 0.0) or np.any(np.isnan(d)): error = True root1 = -999.9 root2 = -999.9 else: root1 = (-b + np.sqrt(d)) / (2.0 * a) root2 = (-b - np.sqrt(d)) / (2.0 * a) return np.minimum(root1, root2), error def calc_jmax(self, N_e, N_c, a_j=15870.0, b_j=2775.0): """ Evans (1989( found a linear reln between the rate of electron transport and the total amount of nitrogen in the thylakoids per unit chlorophyll Parameters: ---------- N_e : float amount of N in electron transport components (mol m-2) N_c : float amount of N in chlorophyll (mol m-2) a_j : float umol (mol N)-1 s-1 b_j : float umol (mol N)-1 s-1 Returns: -------- Jmax : float Max rate of electron transport (umol m-2 s-1) """ Jmax = a_j * N_e + b_j * N_c return Jmax def calc_vcmax(self, N_r): """ Calculate Rubisco activity Parameters: ---------- N_r : float amount of N in Rubisco (mol m-2) Returns: -------- Vcmax : float Max rate of carboxylation velocity (umol m-2 s-1) """ conv = 7000.0 / 44.0 # mol N to mol Rubiso return self.K_cat * conv * N_r def calc_absorptance(self, N_c): """ Calculate absorptance in the leaf based on N in the chlorophyll protein complexes Parameters: ---------- N_c : float amount of N in chlorophyll (mol m-2) Returns: -------- absorptance : float Leaf absorptance (-) """ return (25.0 * N_c) / (25.0 * N_c + 0.076) def calc_n_alloc_soluble_protein(self, Jmax): """ Calculate N allocated to soluble protein Parameters: -------- Jmax : float Max rate of electron transport (umol m-2 s-1) Returns: -------- N_s : float amount of N in soluble protein other than Rubisco (mol m-2) """ return self.K_s * Jmax def optimise_func(self, x0, par=None, Km=None, Rd=None, gamma_star=None, Ca=None, N_p=None): """ Minimisation and maximisation are equivalent, to 
get the global maximium we just need to return -f(x) instead of f(x) """ An, error = self.calc_photosynthesis(x0, par, Km, Rd, gamma_star, Ca, N_p) # I want only positive values for xopt if np.any(x0<0.0): return np.nan (N_e, N_c, N_r, N_other) = x0 # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) # xopt can't be > N_p if (N_s + N_e + N_r + N_c + N_other) > N_p: return np.nan An, err = self.calc_photosynthesis(x0, par, Km, Rd, gamma_star, Ca, N_p) if err: return np.nan else: return self.sign * np.mean(An) def constraint1(self, x, *args): """ N_s + N_e + N_r + N_c < N_p returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ (par, Km, Rd, gamma_star, Ca, N_p) = args # unpack...we need to pass as a list for the optimisation step (N_e, N_c, N_r, N_other) = x # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) return N_p - (N_s + N_e + N_r + N_c + N_other) def constraint2(self, x, *args): """ N_e > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[0] def constraint3(self, x, *args): """ N_c > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[1] def constraint4(self, x, *args): """ N_r > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[2] def constraint5(self, x, *args): """ N_other > 0.0 returns a positive number if within bound and 0.0 it is 
exactly on the edge of the bound """ return x[3] def optimise_test(): # default values Km = 544. Rd = 0.5 gamma_star = 38.6 par = np.arange(10.0, 1500.0, 20.0) Ca = np.ones(len(par)) * 350.0 P = PhotosynthesisModel(MAXIMISE=True) func = P.optimise_func # short name for func # initial guess # Testing, just equally divide up the total N available N_p = 0.09 x0 = np.array([N_p/5.0, N_p/5.0, N_p/5.0, N_p/5.0]) args = (par, Km, Rd, gamma_star, Ca, N_p) cons = ({'type': 'ineq', 'fun': P.constraint1, 'args': args}, {'type': 'ineq', 'fun': P.constraint2, 'args': args}, {'type': 'ineq', 'fun': P.constraint3, 'args': args}, {'type': 'ineq', 'fun': P.constraint4, 'args': args}, {'type': 'ineq', 'fun': P.constraint5, 'args': args}) result = scipy.optimize.minimize(func, x0, method="COBYLA", constraints=cons, args=args) if result["success"]: print result.x if __name__ == "__main__": optimise_test() -- View this message in context: http://scipy-user.10969.n7.nabble.com/maximising-a-function-tp17980.html Sent from the Scipy-User mailing list archive at Nabble.com. From mdekauwe at gmail.com Mon Mar 11 02:21:21 2013 From: mdekauwe at gmail.com (mdekauwe) Date: Sun, 10 Mar 2013 23:21:21 -0700 (PDT) Subject: [SciPy-User] maximising a function In-Reply-To: <1362979858440-17980.post@n7.nabble.com> References: <1362979858440-17980.post@n7.nabble.com> Message-ID: <1362982881134-17981.post@n7.nabble.com> Apologies I just realised np.minimum did not do what I thought, which was messing up the quadratic calculation. I think this seems like it is working. I am still not sure if there is a better way to set up the constraints? But I guess this seems OK. thanks. #!/usr/bin/env python """ Belinda's optimal N allocation model. Optimise the distribution of nitrogen within the photosynthetic system to maximise photosynthesis for a given PAR and CO2 concentration. Reference: ========= * Medlyn (1996) The Optimal Allocation of Nitrogen within the C3 Photsynthetic System at Elevated CO2. 
Australian Journal of Plant Physiology, 23, 593-603. """ __author__ = "Martin De Kauwe" __version__ = "1.0 (05.03.2013)" __email__ = "mdekauwe at gmail.com" import sys import numpy as np import matplotlib.pyplot as plt from scipy.optimize import fsolve import scipy.optimize import warnings class PhotosynthesisModel(object): """ Photosynthesis model based on Evans (1989) References: ----------- * Evans, 1989 """ def __init__(self, alpha=0.425, beta=0.7, g_m=0.4, theta=0.7, K_cat=24.0, K_s=1.25E-04, MAXIMISE=False): """ Parameters ---------- alpha : float quantum yield of electron transport (mol mol-1) beta : float constant of proportionality between atmospheric and intercellular CO2 concentrations (-) g_m : float Conductance to CO2 transfer between intercellular spaces and sites of carboxylation (mol m-2 s-1 bar-1) theta : float curvature of the light response (-) K_cat : float specific activitt of Rubisco (mol CO2 mol-1 Rubisco s-1) K_s : float constant of proportionality between N_s and Jmax (g N m2 s umol-1) MAXIMISE : logical if we want to minimise a function we need to return f(x), if we want to maximise a function we return -f(x) """ self.alpha = alpha self.beta = beta self.g_m = g_m self.theta = theta self.K_cat = K_cat self.K_s = K_s self.sign = 1.0 if MAXIMISE: self.sign = -1.0 else: self.sign = 1.0 def calc_photosynthesis(self, N_pools=None, par=None, Km=None, Rd=None, gamma_star=None, Ca=None, N_p=None, error=False): """ Leaf temperature is assumed to be constant = 25 deg C. 
Parameters ---------- N_pools : list of floats list containing the 5 N pools: N_c - amount of N in chlorophyll (mol m-2) N_s - amount of N in soluble protein other than Rubisco (mol m-2) N_e - amount of N in electron transport components (mol m-2) N_r - amount of N in Rubisco (mol m-2) N_other - amount of leaf N not involved in photosynthesis (mol m-2) par : float incident PAR (umol m-2 s-1) Km : float effective Michalis-Menten coefficent of Rubisco (umol mol-1) Rd : float rate of dark respiration (umol m2 s-1) gamma_star : float leaf CO2 compensation point (umol mol-1) Ca : float atmospheric CO2 concentration (umol mol-1) N_p : float total amount of leaf N to be distrubuted (mol m-2) Returns: -------- An : float Net leaf assimilation rate [umol m-2 s-1] """ # unpack...we need to pass as a list for the optimisation step (N_e, N_c, N_r, N_other) = N_pools # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) self.N_pool_store = np.array([N_e, N_c, N_r, N_s, N_other]) # rate of electron transport, a saturating function of absorbed PAR J, error = self.quadratic(a=self.theta, b=-(self.alpha * absorptance * par + Jmax), c=self.alpha * absorptance * par * Jmax) if error: return J, error # CO2 concentration in the intercellular air spaces Ci = self.beta * Ca # Photosynthesis when Rubisco is limiting a = 1. / self.g_m b = (Rd - Vcmax) / self.g_m - Ci - Km c = Vcmax * (Ci - gamma_star) - Rd * (Ci + Km) Wc, error = self.quadratic(a=a, b=b, c=c) if error: return Wc, error # Photosynthesis when electron transport is limiting a = -4. 
/ self.g_m b = (Rd - J / 4.0) / self.g_m - Ci - 2.0 * gamma_star c = J / 4.0 * (Ci - gamma_star) - Rd * (Ci + 2.0 * gamma_star) Wj, error = self.quadratic(a=a, b=b, c=c) if error: return Wj, error Aleaf = np.minimum(Wc, Wj) Cc = Ci - (Aleaf / self.g_m); An = (1.0 - (gamma_star / Cc)) * Aleaf - Rd return An, error def assim(self, Cc, a1, a2): """Photosynthesis with the limitation defined by the variables passed as a1 and a2, i.e. if we are calculating vcmax or jmax limited. Parameters: ---------- Cc : float CO2 concentration at the site of of carboxylation gamma_star : float CO2 compensation point in the abscence of mitochondrial respiration a1 : float variable depends on whether the calculation is light or rubisco limited. a2 : float variable depends on whether the calculation is light or rubisco limited. Returns: ------- assimilation_rate : float assimilation rate assuming either light or rubisco limitation. """ return a1 * (Cc / (Cc + a2)) def quadratic(self, a=None, b=None, c=None, error=False): """ minimilist quadratic solution as root for J solution should always be positive, so I have excluded other quadratic solution steps. 
I am only returning the smallest of the two roots Parameters: ---------- a : float co-efficient b : float co-efficient c : float co-efficient Returns: ------- val : float positive root """ d = b**2 - 4.0 * a * c # discriminant if np.any(d <= 0.0) or np.any(np.isnan(d)): error = True root1 = 0.0 root2 = 0.0 warnings.warn("Imaginary root found") else: root1 = (-b - np.sqrt(d)) / (2.0 * a) #root2 = (-b + np.sqrt(d)) / (2.0 * a) # don't want to return this return root1, error def calc_jmax(self, N_e, N_c, a_j=15870.0, b_j=2775.0): """ Evans (1989( found a linear reln between the rate of electron transport and the total amount of nitrogen in the thylakoids per unit chlorophyll Parameters: ---------- N_e : float amount of N in electron transport components (mol m-2) N_c : float amount of N in chlorophyll (mol m-2) a_j : float umol (mol N)-1 s-1 b_j : float umol (mol N)-1 s-1 Returns: -------- Jmax : float Max rate of electron transport (umol m-2 s-1) """ Jmax = a_j * N_e + b_j * N_c return Jmax def calc_vcmax(self, N_r): """ Calculate Rubisco activity Parameters: ---------- N_r : float amount of N in Rubisco (mol m-2) Returns: -------- Vcmax : float Max rate of carboxylation velocity (umol m-2 s-1) """ conv = 7000.0 / 44.0 # mol N to mol Rubiso return self.K_cat * conv * N_r def calc_absorptance(self, N_c): """ Calculate absorptance in the leaf based on N in the chlorophyll protein complexes Parameters: ---------- N_c : float amount of N in chlorophyll (mol m-2) Returns: -------- absorptance : float Leaf absorptance (-) """ return (25.0 * N_c) / (25.0 * N_c + 0.076) def calc_n_alloc_soluble_protein(self, Jmax): """ Calculate N allocated to soluble protein Parameters: -------- Jmax : float Max rate of electron transport (umol m-2 s-1) Returns: -------- N_s : float amount of N in soluble protein other than Rubisco (mol m-2) """ return self.K_s * Jmax def optimise_func(self, x0, par=None, Km=None, Rd=None, gamma_star=None, Ca=None, N_p=None): """ Minimisation and 
maximisation are equivalent, to get the global maximium we just need to return -f(x) instead of f(x) """ An, err = self.calc_photosynthesis(x0, par, Km, Rd, gamma_star, Ca, N_p) return self.sign * np.mean(An) def penalty(self, x0, par=None, Km=None, Rd=None, gamma_star=None, Ca=None, N_p=None): An, error = self.calc_photosynthesis(x0, par, Km, Rd, gamma_star, Ca, N_p) if np.any(x0<0.0): return np.nan (N_e, N_c, N_r, N_other) = x0 # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) if (N_s + N_e + N_r + N_c + N_other) > N_p: return np.nan An, error = self.calc_photosynthesis(x0, par, Km, Rd, gamma_star, Ca, N_p) if error: return np.nan else: return self.sign * np.mean(An) #return self.sign * np.mean(An) def constraint1(self, x, *args): """ N_s + N_e + N_r + N_c < N_p returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ (par, Km, Rd, gamma_star, Ca, N_p) = args # unpack...we need to pass as a list for the optimisation step (N_e, N_c, N_r, N_other) = x # Leaf absorptance depends on the chlorophyll protein complexes (N_c) absorptance = self.calc_absorptance(N_c) # Max rate of electron transport (umol m-2 s-1) Jmax = self.calc_jmax(N_e, N_c) # Max rate of carboxylation velocity (umol m-2 s-1) Vcmax = self.calc_vcmax(N_r) N_s = self.calc_n_alloc_soluble_protein(Jmax) return N_p - (N_s + N_e + N_r + N_c + N_other) def constraint2(self, x, *args): """ N_e > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[0] def constraint3(self, x, *args): """ N_c > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[1] def constraint4(self, x, *args): """ N_r > 0.0 returns a 
positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[2] def constraint5(self, x, *args): """ N_other > 0.0 returns a positive number if within bound and 0.0 it is exactly on the edge of the bound """ return x[3] def optimise_test(): # default values Km = 544. Rd = 0.5 gamma_star = 38.6 par = np.arange(10.0, 1500.0, 20.0) Ca = np.ones(len(par)) * 350.0 P = PhotosynthesisModel(MAXIMISE=True) func = P.calc_photosynthesis #F = P.penalty # short name for func F = P.optimise_func # initial guess # Testing, just equally divide up the total N available N_p = 0.09 x0 = np.array([N_p/5.0, N_p/5.0, N_p/5.0, N_p/5.0]) args = (par, Km, Rd, gamma_star, Ca, N_p) cons = ({'type': 'ineq', 'fun': P.constraint1, 'args': args}, {'type': 'ineq', 'fun': P.constraint2, 'args': args}, {'type': 'ineq', 'fun': P.constraint3, 'args': args}, {'type': 'ineq', 'fun': P.constraint4, 'args': args}, {'type': 'ineq', 'fun': P.constraint5, 'args': args}) result = scipy.optimize.minimize(F, x0, method="COBYLA", constraints=cons, args=args) if result["success"]: print result (N_c, N_e, N_r, N_s, N_other) = P.N_pool_store fitted_x0 = np.array([N_c, N_e, N_r, N_other]) # bit of a hack as there can be small rounding errors, # not sure of a better solution? diff = np.sum(P.N_pool_store) - N_p print "Diff", diff print fitted_x0 An, err = func(fitted_x0, par, Km, Rd, gamma_star, Ca, N_p) print err print np.mean(An) plt.plot(par, An, label="Cobyla") plt.legend(loc="best", numpoints=1) plt.show() if __name__ == "__main__": optimise_test() -- View this message in context: http://scipy-user.10969.n7.nabble.com/maximising-a-function-tp17980p17981.html Sent from the Scipy-User mailing list archive at Nabble.com. 
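The trick used throughout this thread, returning -f(x) so that scipy.optimize.minimize maximises f, plus COBYLA 'ineq' constraints that must evaluate >= 0, can be seen on a toy problem. The objective and constraints below are made up for illustration and have nothing to do with the N-allocation model:

```python
import numpy as np
from scipy.optimize import minimize

# Maximise f(x) = -(x0 - 1)^2 - (x1 - 2)^2 subject to
# x0 + x1 <= 4 and x0, x1 >= 0.  COBYLA minimises, so we hand it -f(x);
# each 'ineq' constraint function must be >= 0 at a feasible point.
def neg_f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2  # this is -f(x)

cons = ({'type': 'ineq', 'fun': lambda x: 4.0 - x[0] - x[1]},
        {'type': 'ineq', 'fun': lambda x: x[0]},
        {'type': 'ineq', 'fun': lambda x: x[1]})

res = minimize(neg_f, np.array([0.5, 0.5]), method="COBYLA",
               constraints=cons)
print(res.x)  # close to (1, 2), the feasible unconstrained maximiser
```

COBYLA only handles inequality constraints, which is why the non-negativity bounds are written as separate 'ineq' entries rather than as `bounds`.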
From lorenzo.isella at gmail.com Mon Mar 11 02:54:22 2013 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 11 Mar 2013 07:54:22 +0100 Subject: [SciPy-User] Python for Card Game Modeling Message-ID:

Dear All, Unfortunately, I cannot come up with a script or a concrete example at this stage. I am trying to statistically investigate a card game (not a game like poker, but something more like Magic). There are various kinds of cards (agents, missions, victory points), each obeying a given set of rules. The purpose of the statistical analysis is to come up with a measure of the strength of each card. Is there any Python package/script "out there" (ideally using SciPy/NumPy) that can help me? Any suggestion is welcome. Lorenzo

From subhabangalore at gmail.com Mon Mar 11 15:18:10 2013 From: subhabangalore at gmail.com (Subhabrata Banerjee) Date: Mon, 11 Mar 2013 12:18:10 -0700 (PDT) Subject: [SciPy-User] HMM in Python Message-ID: <20896bfc-7b82-423b-baff-8dfa91f39d80@googlegroups.com>

Dear Group, I do not know whether this is an off-topic question, but as you are great scientific computing people I thought to post it here. I am looking for a simple worked example of the forward probability of a Hidden Markov Model (HMM) on a sequence-labelling problem. It would be great if anyone could kindly help me out. Regards, Subhabrata.

From afraser at lanl.gov Mon Mar 11 16:28:20 2013 From: afraser at lanl.gov (Andrew Fraser) Date: Mon, 11 Mar 2013 14:28:20 -0600 Subject: [SciPy-User] HMM in Python In-Reply-To: <20896bfc-7b82-423b-baff-8dfa91f39d80@googlegroups.com> References: <20896bfc-7b82-423b-baff-8dfa91f39d80@googlegroups.com> Message-ID: <513E3E64.6020502@lanl.gov>

I am rewriting the code for my book "Hidden Markov Models and Dynamical Systems" (see http://www.siam.org/books/ot107/) in python3. The code for the basic algorithms (including what you ask for) is written.
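In the meantime, the forward recursion itself is only a few lines of NumPy. A minimal sketch, with a made-up 2-state, 2-symbol model (the values of pi, A and B below are illustrative toy numbers, not from the book or any real data):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Return P(obs) for an HMM with initial distribution pi,
    transition matrix A[i, j] = P(j | i), emissions B[i, k] = P(k | i)."""
    alpha = pi * B[:, obs[0]]          # alpha_0(i) = pi_i * b_i(o_0)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # alpha_t = (alpha_{t-1} A) * b(o_t)
    return alpha.sum()

# made-up 2-state, 2-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))    # P(obs) = 0.10893 up to float rounding
```

For long observation sequences the alphas underflow, so a practical implementation normalises alpha at each step and accumulates the log of the normalisers instead.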
I am in the process of getting permission from my employer to distribute the code under GPL. I hope to get permission this week. Andy

On 03/11/2013 01:18 PM, Subhabrata Banerjee wrote: > Dear Group, > > I do not know whether it is an out of box question but as you are > great scientific computing people so I thought to post it here. > > I am looking a simple example work out for forward probability of > Hidden Markov Model(HMM) on the problem of sequence labelling. > > If any one can kindly help me out. > > Regards, > Subhabrata. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user

From subhabangalore at gmail.com Mon Mar 11 18:39:36 2013 From: subhabangalore at gmail.com (Subhabrata Banerjee) Date: Tue, 12 Mar 2013 04:09:36 +0530 Subject: [SciPy-User] HMM in Python In-Reply-To: <513E3E64.6020502@lanl.gov> References: <20896bfc-7b82-423b-baff-8dfa91f39d80@googlegroups.com> <513E3E64.6020502@lanl.gov> Message-ID:

Dear Sir, Thank you for your kind note. I read some material from Devert Alexandre and the problem is now more or less solved. But knowing a great peer is always helpful. We are expecting your code to have a great launch and to spark a lot of mind-blowing discussion. Regards, Subhabrata.

On Tue, Mar 12, 2013 at 1:58 AM, Andrew Fraser wrote: > ** > I am rewriting the code for my book "Hidden Markov Models and Dynamical > Systems" (see http://www.siam.org/books/ot107/) in python3. The code for > the basic algorithms (including what you ask for) is written. I am in the > process of getting permission from my employer to distribute the code under > GPL. I hope to get permission this week. > > Andy > > On 03/11/2013 01:18 PM, Subhabrata Banerjee wrote: > > Dear Group, > > I do not know whether it is an out of box question but as you are great > scientific computing people so I thought to post it here.
> > I am looking a simple example work out for forward probability of Hidden > Markov Model(HMM) on the problem of sequence labelling. > > If any one can kindly help me out. > > Regards, > Subhabrata. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- *Sree Subhabrata Banerjee* *PDF(2007-2010)[IISc, Bangalore]* Call:09873237945 Member, IEEE(USA), ACM(USA).

From denis-bz-py at t-online.de Tue Mar 12 13:14:08 2013 From: denis-bz-py at t-online.de (denis) Date: Tue, 12 Mar 2013 17:14:08 +0000 (UTC) Subject: [SciPy-User] fast N-d interpolators Intergrid and Barypol Message-ID:

Folks, two small fast interpolators:

[Intergrid](http://denis-bz.github.com/docs/intergrid.html): interpolate in an N-d box grid, uniform or non-uniform. This is just a wrapper for scipy.ndimage.map_coordinates and numpy.interp.

[Barypol](http://denis-bz.github.com/docs/barypol.html): interpolate in a uniform N-d box grid, using d + 1 corners of a simplex (triangle, tetrahedron ...) around each query point. From Munos and Moore, "Variable Resolution Discretization in Optimal Control", 1999, 24p; see the pictures and example on pp. 4-5. It's implemented in a C++ header barypol.h, with a Cython wrapper (both of which more knowledgeable people could certainly improve).

In 4d, 5d, 6d, Intergrid does around 3M, 2M, .8M interpolations / second. Barypol is ~ 5 times faster, but not as smooth. (Both will of course become cache-bound for large grids.) See interpol/test/*.log and http://github.com/denis-bz/interpol .

Comments are welcome, testcases most welcome.
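For anyone who has not used the underlying call, a minimal sketch of scipy.ndimage.map_coordinates on a toy uniform 2-d grid (grid values and query points are made up); Intergrid adds non-uniform grids and the index scaling on top of this:

```python
import numpy as np
from scipy.ndimage import map_coordinates

grid = np.arange(16.0).reshape(4, 4)   # toy grid: grid[i, j] = 4*i + j
# two query points in (row, col) grid coordinates: (1.5, 1.5) and (0, 2)
coords = np.array([[1.5, 0.0],         # row coordinate of each point
                   [1.5, 2.0]])        # column coordinate of each point
vals = map_coordinates(grid, coords, order=1)  # order=1: (bi)linear
print(vals)  # 7.5 and 2.0, exact because the toy grid is linear in (i, j)
```

Note the coordinate array is laid out one row per axis, one column per query point, which is the layout Intergrid hides behind its grid-to-index mapping.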
cheers -- denis From sunnychugh87 at gmail.com Wed Mar 13 04:08:51 2013 From: sunnychugh87 at gmail.com (sunnychugh87) Date: Wed, 13 Mar 2013 01:08:51 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Installing Polymode In-Reply-To: <51099FAA.3010604@uci.edu> References: <51099FAA.3010604@uci.edu> Message-ID: <35168533.post@talk.nabble.com> i have installed numpy mkl and scipy but when i import polymode in program,,it gives this error--- Traceback (most recent call last): File "C:/Documents and Settings/student/Desktop/a1.py", line 1, in import Polymode File "C:\Python26\lib\site-packages\Polymode\__init__.py", line 47, in from .mathlink import coordinates, constants File "C:\Python26\lib\site-packages\Polymode\mathlink\__init__.py", line 37, in from .bessel_ratios import besselj_ratio, hankel1_ratio File "numpy.pxd", line 119, in init Polymode.mathlink.bessel_ratios (Polymode\mathlink\bessel_ratios.cpp:5165) ValueError: numpy.dtype does not appear to be the correct type object can u help...i need it bad Christoph Gohlke wrote: > > On 1/29/2013 8:33 AM, Manders, Mark wrote: >> I was trying to install polymode into my Python 2.7 (32bit), for some >> reason the following errors come up. It can?t seem to find the numpy >> ?lapack? libraries even though I have numpy & scipy installed. I also >> have boost_python installed but it can?t find anything from that either. >> > > Try . It requires > Numpy-MKL and Scipy from the same website. > > Christoph > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Installing-Polymode-tp34958853p35168533.html Sent from the Scipy-User mailing list archive at Nabble.com. 
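A "ValueError: numpy.dtype does not appear to be the correct type object" from a compiled extension is the usual symptom of the extension having been built against a different numpy release than the one actually imported, which is why the replies in this thread suggest matching numpy versions. A quick way to see which numpy the interpreter loads:

```python
import numpy

# Version and install location of the numpy this interpreter imports;
# a compiled extension (here, Polymode's bessel_ratios module) must have
# been built against a binary-compatible numpy release.
print(numpy.__version__)
print(numpy.__file__)
```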
From sunnychugh87 at gmail.com Wed Mar 13 04:10:56 2013 From: sunnychugh87 at gmail.com (sunnychugh87) Date: Wed, 13 Mar 2013 01:10:56 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Installing Polymode In-Reply-To: References: Message-ID: <35168541.post@talk.nabble.com> i have installed numpy mkl and scipy but when i import polymode in program,,it gives this error--- Traceback (most recent call last): File "C:/Documents and Settings/student/Desktop/a1.py", line 1, in import Polymode File "C:\Python26\lib\site-packages\Polymode\__init__.py", line 47, in from .mathlink import coordinates, constants File "C:\Python26\lib\site-packages\Polymode\mathlink\__init__.py", line 37, in from .bessel_ratios import besselj_ratio, hankel1_ratio File "numpy.pxd", line 119, in init Polymode.mathlink.bessel_ratios (Polymode\mathlink\bessel_ratios.cpp:5165) ValueError: numpy.dtype does not appear to be the correct type object can u help...i need it bad Ralf Gommers-3 wrote: > > On Tue, Jan 29, 2013 at 5:33 PM, Manders, Mark > wrote: > >> I was trying to install polymode into my Python 2.7 (32bit), for some >> reason the following errors come up. It can?t seem to find the numpy >> ?lapack? libraries even though I have numpy & scipy installed. I also >> have >> boost_python installed but it can?t find anything from that either.**** >> > > Did you install numpy/scipy from source or with a binary install? If the > latter, you probably don't have a separate BLAS/LAPACK on your system. > Either way, you need a setup.cfg to point setup.py to whereever your > BLAS/LAPACK is on your system. > > Note that Polymode is provided as Python(x,y) plugin as well. So if you > use > Python(x,y) that would be the way to go. 
> > Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Installing-Polymode-tp34958853p35168541.html Sent from the Scipy-User mailing list archive at Nabble.com. From cgohlke at uci.edu Wed Mar 13 11:49:29 2013 From: cgohlke at uci.edu (Christoph Gohlke) Date: Wed, 13 Mar 2013 08:49:29 -0700 Subject: [SciPy-User] [SciPy-user] Installing Polymode In-Reply-To: <35168533.post@talk.nabble.com> References: <51099FAA.3010604@uci.edu> <35168533.post@talk.nabble.com> Message-ID: <5140A009.6040302@uci.edu> On 3/13/2013 1:08 AM, sunnychugh87 wrote: > > i have installed numpy mkl and scipy > > but when i import polymode in program,,it gives this error--- > > Traceback (most recent call last): > File "C:/Documents and Settings/student/Desktop/a1.py", line 1, in > > import Polymode > File "C:\Python26\lib\site-packages\Polymode\__init__.py", line 47, in > > from .mathlink import coordinates, constants > File "C:\Python26\lib\site-packages\Polymode\mathlink\__init__.py", line > 37, in > from .bessel_ratios import besselj_ratio, hankel1_ratio > File "numpy.pxd", line 119, in init Polymode.mathlink.bessel_ratios > (Polymode\mathlink\bessel_ratios.cpp:5165) > ValueError: numpy.dtype does not appear to be the correct type object > > > can u help...i need it bad > > > > > Christoph Gohlke wrote: >> >> On 1/29/2013 8:33 AM, Manders, Mark wrote: >>> I was trying to install polymode into my Python 2.7 (32bit), for some >>> reason the following errors come up. It can?t seem to find the numpy >>> ?lapack? libraries even though I have numpy & scipy installed. I also >>> have boost_python installed but it can?t find anything from that either. >>> >> >> Try . It requires >> Numpy-MKL and Scipy from the same website. >> Either downgrade to numpy 1.6 or use the updated builds at with numpy 1.7. 
Christoph

From a.klein at science-applied.nl Wed Mar 13 16:31:57 2013 From: a.klein at science-applied.nl (Almar Klein) Date: Wed, 13 Mar 2013 21:31:57 +0100 Subject: [SciPy-User] ANN: IEP 3.2 - the Interactive Editor for Python Message-ID:

Dear all, We're pleased to announce version 3.2 of the Interactive Editor for Python. IEP is a cross-platform Python IDE focused on interactivity and introspection, which makes it very suitable for scientific computing. Its practical design is aimed at simplicity and efficiency. Binaries are available for Windows, Linux, and Mac.

A selection of the changes since 3.1:
- This is the first release for which all binaries are built with Python 3.3 and PySide.
- New file browser tool, which replaces the old file browser and project manager tools. It combines the power of both in one simple interface. It also has functionality for peeking inside python modules. Since its design uses a generalization of the file system, implementing alternative file systems (like zip files or remote machines) should be quite easy in the future.
- IEP now also comes with two fonts: 'DejaVu Sans Mono' (the default font) and Adobe's new 'Source Code Pro'.
- IEP now supports multiple languages. Translations for Dutch, French, Spanish and Catalan are available.

Website: http://code.google.com/p/iep/ Discussion group: http://groups.google.com/group/iep_ Release notes: https://code.google.com/p/iep/wiki/Release

Happy coding! Almar

From eric.emsellem at eso.org Thu Mar 14 04:34:52 2013 From: eric.emsellem at eso.org (Eric Emsellem) Date: Thu, 14 Mar 2013 09:34:52 +0100 Subject: [SciPy-User] Problem with ndimage.interpolation.zoom Message-ID: <51418BAC.70504@eso.org>

Hi, I have a problem with the zoom function in the ndimage module of scipy. I am trying to "expand/zoom" an array by a factor of e.g., 3 or 4. I noticed that when the input array has a shape with odd number of e.g.
lines/rows etc, something odd happens. When you take a random 3d array of 2x2x2 it works, and with a times-3 zoom, it triplicates each value in the output array. However, with the following example you'll see that the border values are replicated only twice and the central value 4 times... What is happening and how can I just get an array with just n times each input value?

# Try:
test = random.random(9)
test3 = test.reshape((3,3))
test3z = scipy.ndimage.interpolation.zoom(test3, 3, mode='constant', prefilter=False, order=0)
# and look at the output test3z...

the border values are only there twice. Not three times... The central value is there 4 times... Thanks for any input. Eric

From denis-bz-py at t-online.de Thu Mar 14 07:33:11 2013 From: denis-bz-py at t-online.de (denis) Date: Thu, 14 Mar 2013 11:33:11 +0000 (UTC) Subject: [SciPy-User] Problem with ndimage.interpolation.zoom References: <51418BAC.70504@eso.org> Message-ID:

Eric Emsellem eso.org> writes:
> I have a problem with the zoom function in the ndimage module of scipy.
> I am trying to "expand/zoom" an array by a factor of e.g., 3 or 4.
> I noticed that when the input array has a shape with odd number of e.g.

Hi Eric,
1) try it with floats, as below; `zoom` uses floats internally, then converts to (in your case) ints when done, confusing.
2) consider zoom=2 of an arange in 1d:
0 1 2
y0 y1 y2 y3 y4 y5
To get y0 = 0 and y5 = 2, yj must be (5/2) * xj, *not* 2 * xj. (For zooming by ints just np.repeat, or maybe you want a http://en.wikipedia.org/wiki/Reconstruction_filter ?)

cheers -- denis

# zoom.py: print ndimage.zoom
from __future__ import division
import sys
import numpy as np
from scipy.ndimage import zoom  # $scipy/ndimage/interpolation.py

side = 3
exec( "\n".join( sys.argv[1:] ))  # run this.py side= ... from sh or ipython

np.set_printoptions( 1, threshold=100, edgeitems=10, suppress=True )

for dim in [1, 2]:
    shape = [side] * dim
    A = np.arange( np.prod(shape), dtype=float ).reshape(shape)
    print "in:\n", A
    print "\nzoom:\n", zoom( A, zoom=2 )

From sunnychugh87 at gmail.com Thu Mar 14 02:15:15 2013 From: sunnychugh87 at gmail.com (sunnychugh87) Date: Wed, 13 Mar 2013 23:15:15 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Installing Polymode In-Reply-To: <5140A009.6040302@uci.edu> References: <51099FAA.3010604@uci.edu> <35168533.post@talk.nabble.com> <5140A009.6040302@uci.edu> Message-ID: <35172793.post@talk.nabble.com>

appreciate ur reply do i have to use numpy mkl-1.6(as it is not available at ur site) or normal numpy 1.6 from main site will work... also i don't understand updated builds with numpy 1.7??? Christoph Gohlke wrote: > > On 3/13/2013 1:08 AM, sunnychugh87 wrote: >> >> i have installed numpy mkl and scipy >> >> but when i import polymode in program,,it gives this error--- >> >> Traceback (most recent call last): >> File "C:/Documents and Settings/student/Desktop/a1.py", line 1, in >> >> import Polymode >> File "C:\Python26\lib\site-packages\Polymode\__init__.py", line 47, in >> >> from .mathlink import coordinates, constants >> File "C:\Python26\lib\site-packages\Polymode\mathlink\__init__.py", >> line >> 37, in >> from .bessel_ratios import besselj_ratio, hankel1_ratio >> File "numpy.pxd", line 119, in init Polymode.mathlink.bessel_ratios >> (Polymode\mathlink\bessel_ratios.cpp:5165) >> ValueError: numpy.dtype does not appear to be the correct type object >> >> >> can u help...i need it bad >> >> >> >> >> Christoph Gohlke wrote: >>> >>> On 1/29/2013 8:33 AM, Manders, Mark wrote: >>>> I was trying to install polymode into my Python 2.7 (32bit), for some >>>> reason the following errors come up. It can't seem to find the numpy >>>> 'lapack' libraries even though I have numpy & scipy installed.
I also >>>> have boost_python installed but it can't find anything from that >>>> either. >>>> >>> >>> Try . It requires >>> Numpy-MKL and Scipy from the same website. >>> > > Either downgrade to numpy 1.6 or use the updated builds at > with numpy 1.7. > > Christoph > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Installing-Polymode-tp34958853p35172793.html Sent from the Scipy-User mailing list archive at Nabble.com.

From tmp50 at ukr.net Fri Mar 15 09:21:21 2013 From: tmp50 at ukr.net (Dmitrey) Date: Fri, 15 Mar 2013 15:21:21 +0200 Subject: [SciPy-User] OpenOpt Suite release 0.45 Message-ID: <1991.1363353681.6942528722304958464@ffe16.ukr.net>

Hi all, I'm glad to inform you about the new OpenOpt Suite release 0.45 (2013-March-15):
* Essential improvements for FuncDesigner interval analysis (thus affecting interalg)
* Temporary workaround for a serious bug in the FuncDesigner automatic differentiation kernel, due to a bug in some versions of Python or NumPy; this may affect optimization problems, including (MI)LP, (MI)NLP, TSP etc
* Some other minor bugfixes and improvements

Regards, D. http://openopt.org/Dmitrey

From a.klein at science-applied.nl Sat Mar 16 18:31:15 2013 From: a.klein at science-applied.nl (Almar Klein) Date: Sat, 16 Mar 2013 23:31:15 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b Message-ID:

Dear all, We're pleased to announce release 2013b of Pyzo distro, a Python distribution for scientific computing based on Python 3. For more information, and to give it a try, visit http://www.pyzo.org.

The most notable changes since the last release:
- Pyzo is also built for Mac.
- Python 3.3 is now used.
- Added an executable to launch IPython notebook.
- New versions of several packages, including the IDE.

More information: Pyzo is available for Windows, Mac and Linux. The distribution is portable, thus providing a way to install a scientific Python stack on computers without the need for admin rights. Naturally, Pyzo distro is compliant with the scipy-stack, and comes with additional packages such as scikit-image and scikit-learn. With Pyzo we want to make scientific computing in Python more easily accessible. We especially hope to make it easier for newcomers (such as Matlab converts) to join our awesome community. Pyzo uses IEP as the default front-end IDE, and IPython (with notebook) is also available.

Happy coding, Almar

From lanceboyle at qwest.net Sat Mar 16 23:22:52 2013 From: lanceboyle at qwest.net (Jerry) Date: Sat, 16 Mar 2013 20:22:52 -0700 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: References: Message-ID: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net>

Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a hello world program. Jerry

On Mar 16, 2013, at 3:31 PM, Almar Klein wrote: > Dear all, > > We're pleased to announce release 2013b of Pyzo distro, a Python distribution for scientific computing based on Python 3. For more information, and to give it a try, visit http://www.pyzo.org. > > The most notable changes since the last release: > Pyzo is also build for Mac. > Python 3.3 is now used. > Added an executable to launch IPython notebook. > New versions of several packages, including the IDE. > More information: > > Pyzo is available for Windows, Mac and Linux. The distribution is portable, thus providing a way to install a scientific Python stack on computers without the need for admin rights. > > Naturally, Pyzo distro is complient with the scipy-stack, and comes with additional packages such as scikit-image and scikit-learn.
> > With Pyzo we want to make scientific computing in Python easier accessible. We especially hope to make it easier for newcomers (such as Matlab converts) to join our awesome community. Pyzo uses IEP as the default front-end IDE, and IPython (with notebook) is also available. > > Happy coding, > Almar > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.klein at science-applied.nl Sun Mar 17 12:14:48 2013 From: a.klein at science-applied.nl (Almar Klein) Date: Sun, 17 Mar 2013 17:14:48 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> Message-ID: Were there any errors reported? - Almar On 17 March 2013 04:22, Jerry wrote: > Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a > hello world program. > Jerry > > On Mar 16, 2013, at 3:31 PM, Almar Klein wrote: > > Dear all, > > We're pleased to announce release 2013b of Pyzo distro, a Python > distribution for scientific computing based on Python 3. For more > information, and to give it a try, visit http://www.pyzo.org. > > The most notable changes since the last release: > > - Pyzo is also build for Mac. > - Python 3.3 is now used. > - Added an executable to launch IPython notebook. > - New versions of several packages, including the IDE. > > More information: > > Pyzo is available for Windows, Mac and Linux. The distribution is > portable, thus providing a way to install a scientific Python stack on > computers without the need for admin rights. > > Naturally, Pyzo distro is complient with the scipy-stack, and comes with > additional packages such as scikit-image and scikit-learn. > > With Pyzo we want to make scientific computing in Python easier > accessible. 
We especially hope to make it easier for newcomers (such as
> Matlab converts) to join our awesome community. Pyzo uses IEP as the
> default front-end IDE, and IPython (with notebook) is also available.
>
> Happy coding,
> Almar
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-- 
Almar Klein, PhD
Science Applied
phone: +31 6 19268652
e-mail: a.klein at science-applied.nl

From lanceboyle at qwest.net  Sun Mar 17 19:47:13 2013
From: lanceboyle at qwest.net (Jerry)
Date: Sun, 17 Mar 2013 16:47:13 -0700
Subject: [SciPy-User] ANN: Pyzo distro 2013b
In-Reply-To: 
References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net>
Message-ID: <2CFCF0FC-8668-4F24-8D4A-AB1A794E11CA@qwest.net>

Here's the crash log.

Process:         pyzo [14506]
Path:            /Applications/Programming/*/pyzo.app/Contents/MacOS/pyzo
Identifier:      pyzo
Version:         ??? (???)
Code Type:       X86 (Native)
Parent Process:  launchd [295]

Date/Time:       2013-03-16 20:20:50.983 -0700
OS Version:      Mac OS X 10.7.5 (11G63b)
Report Version:  9

Crashed Thread:  4

Exception Type:  EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000038

VM Regions Near 0x38:
-->  __PAGEZERO  0000000000000000-0000000000001000 [ 4K] ---/--- SM=NUL  /Applications/Programming/*/pyzo.app/Contents/MacOS/pyzo
     __TEXT      0000000000001000-0000000000004000 [ 12K] r-x/rwx SM=COW  /Applications/Programming/*/pyzo.app/Contents/MacOS/pyzo

Application Specific Information:
objc[14506]: garbage collection is OFF

Thread 0:: Dispatch queue: com.apple.main-thread
0   QtCore                    0x01e1f870 qGetCharAttributes(unsigned short const*, unsigned int, HB_ScriptItem const*, unsigned int, HB_CharAttributes*) + 0
1   QtGui                     0x03fd7a99 QTextEngine::attributes() const + 633
2   QtGui                     0x03fe30e8 QTextLine::layout_helper(int) + 584
3   QtGui                     0x04238bc6 QPlainTextDocumentLayout::layoutBlock(QTextBlock const&) + 486
4   QtGui                     0x0423a554 QPlainTextDocumentLayout::blockBoundingRect(QTextBlock const&) const + 420
5   QtGui                     0x0423ba0c QPlainTextEditPrivate::_q_adjustScrollbars() + 988
6   QtCore                    0x01ece63f QMetaObject::activate(QObject*, QMetaObject const*, int, void**) + 1407
7   QtGui                     0x03fc063e QTextControl::documentSizeChanged(QSizeF const&) + 62
8   QtGui                     0x03fc70d0 QTextControl::qt_static_metacall(QObject*, QMetaObject::Call, int, void**) + 480
9   QtCore                    0x01ec962f QMetaCallEvent::placeMetaCall(QObject*) + 47
10  QtCore                    0x01ecabd7 QObject::event(QEvent*) + 983
11  QtGui                     0x03fc0398 QTextControl::event(QEvent*) + 24
12  QtGui                     0x03d94c5c QApplicationPrivate::notify_helper(QObject*, QEvent*) + 188
13  QtGui                     0x03d99863 QApplication::notify(QObject*, QEvent*) + 1187
14  QtGui.so                  0x03076c14 QApplicationWrapper::notify(QObject*, QEvent*) + 292
15  QtCore                    0x01eb69dc QCoreApplication::notifyInternal(QObject*, QEvent*) + 108
16  QtCore                    0x01eb7df8 QCoreApplicationPrivate::sendPostedEvents(QObject*, int, QThreadData*) + 664
17  QtGui                     0x03d4b66f QEventDispatcherMacPrivate::postedEventsSourcePerformCallback(void*) + 175
18  com.apple.CoreFoundation  0x94a3c13f __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 15
19  com.apple.CoreFoundation  0x94a3baf6 __CFRunLoopDoSources0 + 246
20  com.apple.CoreFoundation  0x94a659c8 __CFRunLoopRun + 1112
21  com.apple.CoreFoundation  0x94a651dc CFRunLoopRunSpecific + 332
22  com.apple.CoreFoundation  0x94a65088 CFRunLoopRunInMode + 120
23  com.apple.HIToolbox       0x90c10543 RunCurrentEventLoopInMode + 318
24  com.apple.HIToolbox       0x90c178ab ReceiveNextEventCommon + 381
25  com.apple.HIToolbox       0x90c1771a BlockUntilNextEventMatchingListInMode + 88
26  com.apple.AppKit          0x990e8ee8 _DPSNextEvent + 678
27  com.apple.AppKit          0x990e8752 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 113
28  com.apple.AppKit          0x990e4ac1 -[NSApplication run] + 911
29  QtGui                     0x03d4b0ca QEventDispatcherMac::processEvents(QFlags) + 1786
30  QtCore                    0x01eb5891 QEventLoop::processEvents(QFlags) + 65
31  QtCore                    0x01eb5c6a QEventLoop::exec(QFlags) + 314
32  QtCore                    0x01eb8346 QCoreApplication::exec() + 182
33  QtGui.so                  0x03075923 Sbk_QApplicationFunc_exec_ + 83
34  libpython                 0x0027293a PyEval_EvalFrameEx + 28042
35  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
36  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
37  libpython                 0x00273ecf PyEval_EvalCode + 95
38  libpython                 0x00269050 builtin_exec + 224
39  libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
40  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
41  libpython                 0x00273ecf PyEval_EvalCode + 95
42  pyzo                      0x00002c55 main + 2261
43  pyzo                      0x000021f5 start + 53

Thread 1:: Dispatch queue: com.apple.libdispatch-manager
0   libsystem_kernel.dylib    0x9bad290a kevent + 10
1   libdispatch.dylib         0x975dee04 _dispatch_mgr_invoke + 969
2   libdispatch.dylib         0x975dd853 _dispatch_mgr_thread + 53

Thread 2:
0   libsystem_kernel.dylib    0x9bad183e __psynch_cvwait + 10
1   libsystem_c.dylib         0x9b222e21 _pthread_cond_wait + 827
2   libsystem_c.dylib         0x9b1d33e0 pthread_cond_timedwait$UNIX2003 + 70
3   libpython                 0x002a2b56 PyThread_acquire_lock_timed + 342
4   libpython                 0x002a8315 acquire_timed + 229
5   libpython                 0x002a86cb lock_PyThread_acquire_lock + 251
6   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
7   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
8   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
9   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
10  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
11  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
12  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
13  libpython                 0x001e693e function_call + 158
14  libpython                 0x001bb6c9 PyObject_Call + 89
15  libpython                 0x001d332c method_call + 140
16  libpython                 0x001bb6c9 PyObject_Call + 89
17  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
18  libpython                 0x002a7e2c t_bootstrap + 76
19  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
20  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 3:
0   libsystem_kernel.dylib    0x9bad1b42 __select + 10
1   time.so                   0x007dc1f3 time_sleep + 211
2   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
3   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
4   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
5   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
6   libpython                 0x001e693e function_call + 158
7   libpython                 0x001bb6c9 PyObject_Call + 89
8   libpython                 0x001d332c method_call + 140
9   libpython                 0x001bb6c9 PyObject_Call + 89
10  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
11  libpython                 0x002a7e2c t_bootstrap + 76
12  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
13  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 4 Crashed:
0   QtCore                    0x01def84a QString::operator=(QString const&) + 26
1   QtGui                     0x03fd254b QTextEngine::validate() const + 123
2   QtGui                     0x03fd5ff0 QTextEngine::itemize() const + 48
3   QtGui                     0x03fde668 QTextLayout::beginLayout() + 40
4   QtGui                     0x04238b02 QPlainTextDocumentLayout::layoutBlock(QTextBlock const&) + 290
5   QtGui                     0x0423a554 QPlainTextDocumentLayout::blockBoundingRect(QTextBlock const&) const + 420
6   QtGui                     0x04031488 QTextCursorPrivate::blockLayout(QTextBlock&) const + 72
7   QtGui                     0x0403337b QTextCursorPrivate::setX() + 123
8   QtGui                     0x0403644e QTextCursor::insertText(QString const&, QTextCharFormat const&) + 846
9   QtGui.so                  0x03568390 Sbk_QTextCursorFunc_insertText + 624
10  libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
11  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
12  libpython                 0x00272ece PyEval_EvalFrameEx + 29470
13  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
14  libpython                 0x001e693e function_call + 158
15  libpython                 0x001bb6c9 PyObject_Call + 89
16  libpython                 0x001d332c method_call + 140
17  libpython                 0x001bb6c9 PyObject_Call + 89
18  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
19  libpython                 0x001e0309 PyFile_WriteObject + 137
20  libpython                 0x00267821 builtin_print + 305
21  libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
22  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
23  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
24  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
25  libpython                 0x00272ece PyEval_EvalFrameEx + 29470
26  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
27  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
28  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
29  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
30  libpython                 0x001e693e function_call + 158
31  libpython                 0x001bb6c9 PyObject_Call + 89
32  libpython                 0x001d332c method_call + 140
33  libpython                 0x001bb6c9 PyObject_Call + 89
34  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
35  libpython                 0x002a7e2c t_bootstrap + 76
36  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
37  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 5:: QFileInfoGatherer
0   libsystem_kernel.dylib    0x9bad183e __psynch_cvwait + 10
1   libsystem_c.dylib         0x9b222e21 _pthread_cond_wait + 827
2   libsystem_c.dylib         0x9b1d342c pthread_cond_wait$UNIX2003 + 71
3   QtCore                    0x01d97976 QWaitCondition::wait(QMutex*, unsigned long) + 294
4   QtGui                     0x0428d9da QFileInfoGatherer::run() + 650
5   QtCore                    0x01d969ea QThreadPrivate::start(void*) + 346
6   libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
7   libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 6:: QKqueueFileSystemWatcherEngine
0   libsystem_kernel.dylib    0x9bad290a kevent + 10
1   QtCore                    0x01e98a55 QKqueueFileSystemWatcherEngine::run() + 117
2   QtCore                    0x01d969ea QThreadPrivate::start(void*) + 346
3   libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
4   libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 7:
0   libsystem_kernel.dylib    0x9bad183e __psynch_cvwait + 10
1   libsystem_c.dylib         0x9b222e21 _pthread_cond_wait + 827
2   libsystem_c.dylib         0x9b1d33e0 pthread_cond_timedwait$UNIX2003 + 70
3   libpython                 0x002a2b56 PyThread_acquire_lock_timed + 342
4   libpython                 0x002a8315 acquire_timed + 229
5   libpython                 0x002a86cb lock_PyThread_acquire_lock + 251
6   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
7   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
8   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x00272ece PyEval_EvalFrameEx + 29470
11  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
12  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
13  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
14  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
15  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
16  libpython                 0x001e693e function_call + 158
17  libpython                 0x001bb6c9 PyObject_Call + 89
18  libpython                 0x001d332c method_call + 140
19  libpython                 0x001bb6c9 PyObject_Call + 89
20  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
21  libpython                 0x002a7e2c t_bootstrap + 76
22  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
23  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 8:
0   libsystem_kernel.dylib    0x9bad1b42 __select + 10
1   select.so                 0x0279733d select_select + 461
2   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
3   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
4   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
5   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
6   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
7   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
8   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x001e693e function_call + 158
11  libpython                 0x001bb6c9 PyObject_Call + 89
12  libpython                 0x001d332c method_call + 140
13  libpython                 0x001bb6c9 PyObject_Call + 89
14  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
15  libpython                 0x002a7e2c t_bootstrap + 76
16  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
17  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 9:
0   libsystem_kernel.dylib    0x9bad183e __psynch_cvwait + 10
1   libsystem_c.dylib         0x9b222e21 _pthread_cond_wait + 827
2   libsystem_c.dylib         0x9b1d33e0 pthread_cond_timedwait$UNIX2003 + 70
3   libpython                 0x002a2b56 PyThread_acquire_lock_timed + 342
4   libpython                 0x002a8315 acquire_timed + 229
5   libpython                 0x002a86cb lock_PyThread_acquire_lock + 251
6   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
7   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
8   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x00272ece PyEval_EvalFrameEx + 29470
11  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
12  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
13  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
14  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
15  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
16  libpython                 0x001e693e function_call + 158
17  libpython                 0x001bb6c9 PyObject_Call + 89
18  libpython                 0x001d332c method_call + 140
19  libpython                 0x001bb6c9 PyObject_Call + 89
20  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
21  libpython                 0x002a7e2c t_bootstrap + 76
22  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
23  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 10:
0   libsystem_kernel.dylib    0x9bad1b42 __select + 10
1   select.so                 0x0279733d select_select + 461
2   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
3   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
4   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
5   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
6   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
7   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
8   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x001e693e function_call + 158
11  libpython                 0x001bb6c9 PyObject_Call + 89
12  libpython                 0x001d332c method_call + 140
13  libpython                 0x001bb6c9 PyObject_Call + 89
14  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
15  libpython                 0x002a7e2c t_bootstrap + 76
16  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
17  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 11:: com.apple.CFSocket.private
0   libsystem_kernel.dylib    0x9bad1b42 __select + 10
1   com.apple.CoreFoundation  0x94ab3e15 __CFSocketManager + 1557
2   libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
3   libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 12:
0   libsystem_kernel.dylib    0x9bad2d4e __read + 10
1   libpython                 0x002d5de4 fileio_read + 228
2   libpython                 0x001bb6c9 PyObject_Call + 89
3   libpython                 0x001bbdc2 call_function_tail + 66
4   libpython                 0x001bbf9a callmethod + 74
5   libpython                 0x001bc073 _PyObject_CallMethodId_SizeT + 67
6   libpython                 0x002d447d iobase_readline + 429
7   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
8   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
9   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
10  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
11  libpython                 0x001e693e function_call + 158
12  libpython                 0x001bb6c9 PyObject_Call + 89
13  libpython                 0x001d332c method_call + 140
14  libpython                 0x001bb6c9 PyObject_Call + 89
15  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
16  libpython                 0x002a7e2c t_bootstrap + 76
17  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
18  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 13:
0   libsystem_kernel.dylib    0x9bad183e __psynch_cvwait + 10
1   libsystem_c.dylib         0x9b222e21 _pthread_cond_wait + 827
2   libsystem_c.dylib         0x9b1d33e0 pthread_cond_timedwait$UNIX2003 + 70
3   libpython                 0x002a2b56 PyThread_acquire_lock_timed + 342
4   libpython                 0x002a8315 acquire_timed + 229
5   libpython                 0x002a86cb lock_PyThread_acquire_lock + 251
6   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
7   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
8   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x00272ece PyEval_EvalFrameEx + 29470
11  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
12  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
13  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
14  libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
15  libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
16  libpython                 0x001e693e function_call + 158
17  libpython                 0x001bb6c9 PyObject_Call + 89
18  libpython                 0x001d332c method_call + 140
19  libpython                 0x001bb6c9 PyObject_Call + 89
20  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
21  libpython                 0x002a7e2c t_bootstrap + 76
22  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
23  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 14:
0   libsystem_kernel.dylib    0x9bad1b42 __select + 10
1   select.so                 0x0279733d select_select + 461
2   libpython                 0x00272d48 PyEval_EvalFrameEx + 29080
3   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
4   libpython                 0x00272ece PyEval_EvalFrameEx + 29470
5   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
6   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
7   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
8   libpython                 0x002730e7 PyEval_EvalFrameEx + 30007
9   libpython                 0x00273de6 PyEval_EvalCodeEx + 2246
10  libpython                 0x001e693e function_call + 158
11  libpython                 0x001bb6c9 PyObject_Call + 89
12  libpython                 0x001d332c method_call + 140
13  libpython                 0x001bb6c9 PyObject_Call + 89
14  libpython                 0x0026a20e PyEval_CallObjectWithKeywords + 78
15  libpython                 0x002a7e2c t_bootstrap + 76
16  libsystem_c.dylib         0x9b21eed9 _pthread_start + 335
17  libsystem_c.dylib         0x9b2226de thread_start + 34

Thread 4 crashed with X86 Thread State (32-bit):
  eax: 0x0c2c5930  ebx: 0x00000038  ecx:
0x009d8500 edx: 0xb118b001 edi: 0x06ee1fa4 esi: 0xb118b0ec ebp: 0xb118b0b8 esp: 0xb118b0a0 ss: 0x00000023 efl: 0x00010202 eip: 0x01def84a cs: 0x0000001b ds: 0x00000023 es: 0x00000023 fs: 0x00000023 gs: 0x0000000f cr2: 0x00000038 Logical CPU: 1 Binary Images: 0x1000 - 0x3ff7 +pyzo (??? - ???) <5879C298-5373-085A-1B72-99C5D7FE5B37> /Applications/Programming/*/pyzo.app/Contents/MacOS/pyzo 0x1b2000 - 0x31cfff +libpython (3.3.0 - compatibility 3.3.0) /Applications/Programming/*/pyzo.app/Contents/MacOS/libpython 0x40b000 - 0x413ff7 +libintl.8.dylib (10.2.0 - compatibility 10.0.0) <9FA0F148-5E4F-ACFD-26E5-60D8E6AC1D98> /Applications/Programming/*/pyzo.app/Contents/MacOS/libintl.8.dylib 0x418000 - 0x512fe7 +libiconv.2.dylib (8.1.0 - compatibility 8.0.0) <741383C6-EC9F-1FFA-4FB9-753659915948> /Applications/Programming/*/pyzo.app/Contents/MacOS/libiconv.2.dylib 0x7da000 - 0x7dcff7 +time.so (??? - ???) <63E345CF-0A99-CA87-A4B2-83AC282087E8> /Applications/Programming/*/time.so 0x7e2000 - 0x7e6fe7 +math.so (??? - ???) <68272721-A98A-215E-D8C8-7143B7497E56> /Applications/Programming/*/math.so 0x7ed000 - 0x7f1ff7 +_dotblas.so (??? - ???) <42248344-6FE7-98E0-12FE-FCA6C0287718> /Applications/Programming/*/_dotblas.so 0x7f5000 - 0x7f8ff7 +_struct.so (??? - ???) /Applications/Programming/*/_struct.so 0x10c6000 - 0x10d0fe7 +_datetime.so (??? - ???) <0988B993-0126-0477-7BBE-6F1E42E8A425> /Applications/Programming/*/_datetime.so 0x10da000 - 0x10e5ff7 +_pickle.so (??? - ???) /Applications/Programming/*/_pickle.so 0x10f0000 - 0x10f4ff7 +_compiled_base.so (??? - ???) /Applications/Programming/*/_compiled_base.so 0x10f8000 - 0x10fbff7 +lapack_lite.so (??? - ???) <02C11DC3-3319-7EA9-26B2-676DD8D5EE93> /Applications/Programming/*/lapack_lite.so 0x1240000 - 0x1339fe7 +multiarray.so (??? - ???) /Applications/Programming/*/multiarray.so 0x13bd000 - 0x13ffff7 +umath.so (??? - ???) /Applications/Programming/*/umath.so 0x1463000 - 0x1481ff7 +scalarmath.so (??? - ???) 
<5D3D1EEA-B4EB-E609-6F53-1C9A51F2D35E> /Applications/Programming/*/scalarmath.so 0x158f000 - 0x158fff7 +grp.so (??? - ???) <24FDD17F-84B6-6DAB-63AE-D908EE600B34> /Applications/Programming/*/grp.so 0x1592000 - 0x1593ff7 +_bz2.so (??? - ???) /Applications/Programming/*/_bz2.so 0x1597000 - 0x15a5ff7 +libbz2.1.0.dylib (1.0.6 - compatibility 1.0.0) <5C1D59DF-FCAB-54EA-9C48-AC09BA9B3FBE> /Applications/Programming/*/libbz2.1.0.dylib 0x15e9000 - 0x15f2ff7 +fftpack_lite.so (??? - ???) <32FE8F56-8236-CF93-FFBA-846CF0DD96D4> /Applications/Programming/*/fftpack_lite.so 0x1676000 - 0x16b5fe7 +mtrand.so (??? - ???) /Applications/Programming/*/mtrand.so 0x16fd000 - 0x170dffb +_ctypes.so (??? - ???) /Applications/Programming/*/_ctypes.so 0x179a000 - 0x179dfe7 +binascii.so (??? - ???) /Applications/Programming/*/binascii.so 0x17a1000 - 0x17b2ff7 +libz.1.dylib (1.2.7 - compatibility 1.0.0) /Applications/Programming/*/libz.1.dylib 0x17b6000 - 0x17b9ff7 +zlib.so (??? - ???) <2D8B4D07-EA2C-A970-D79E-1BCAF427E136> /Applications/Programming/*/zlib.so 0x17fe000 - 0x17ffff7 +_hashlib.so (??? - ???) <0B45B52D-B666-5157-2B08-2040FE7842F1> /Applications/Programming/*/_hashlib.so 0x1803000 - 0x1850fff +libssl.1.0.0.dylib (??? - ???) /Applications/Programming/*/libssl.1.0.0.dylib 0x1868000 - 0x199efff +libcrypto.1.0.0.dylib (??? - ???) <67227348-A839-FE70-8E8C-A8F6861AEB70> /Applications/Programming/*/libcrypto.1.0.0.dylib 0x19fb000 - 0x19fcff7 +_random.so (??? - ???) <07ED5D23-4BE7-7695-AC71-E73104C7D141> /Applications/Programming/*/_random.so 0x19ff000 - 0x1a00ff7 +fcntl.so (??? - ???) <3B704363-2293-EC64-B84C-BEC4B200FD9B> /Applications/Programming/*/fcntl.so 0x1a03000 - 0x1c59ff7 +QtCore.so (??? - ???) 
/Applications/Programming/*/QtCore.so 0x1d0f000 - 0x1d2efe7 +libpyside.cpython-33m.1.1.dylib (1.1.2 - compatibility 1.1.0) /Applications/Programming/*/libpyside.cpython-33m.1.1.dylib 0x1d3d000 - 0x1d5eff3 +libshiboken.cpython-33m.1.1.dylib (1.1.2 - compatibility 1.1.0) /Applications/Programming/*/libshiboken.cpython-33m.1.1.dylib 0x1d70000 - 0x2021fe7 +QtCore (4.8.4 - compatibility 4.8.0) <20DF77AF-5BA0-7CDD-55AC-25599FDB4893> /Applications/Programming/*/QtCore 0x20d0000 - 0x20d0ff7 +atexit.so (??? - ???) <16387E5C-9A89-45B3-D888-78E7226D19B6> /Applications/Programming/*/atexit.so 0x20d3000 - 0x20f4fe7 +libpng15.15.dylib (30.0.0 - compatibility 30.0.0) <219E2FD8-7D46-F66C-FD17-4C52BDAFEC13> /Applications/Programming/*/libpng15.15.dylib 0x24fb000 - 0x24fcff7 +_posixsubprocess.so (??? - ???) <420C9ED4-D388-B528-0280-9E90B0D3B4FC> /Applications/Programming/*/_posixsubprocess.so 0x2740000 - 0x274aff7 +_socket.so (??? - ???) <471A4BB4-FE39-369B-69F4-F5BD8C271993> /Applications/Programming/*/_socket.so 0x2795000 - 0x2797ff7 +select.so (??? - ???) /Applications/Programming/*/select.so 0x3000000 - 0x3a4efef +QtGui.so (??? - ???) <6FC4A5E0-DBB7-68E0-19B9-99CA4078AB7F> /Applications/Programming/*/QtGui.so 0x3d07000 - 0x469afe7 +QtGui (4.8.4 - compatibility 4.8.0) /Applications/Programming/*/QtGui 0x491a000 - 0x494affb com.apple.security.csparser (3.0 - 55148.6) /System/Library/Frameworks/Security.framework/PlugIns/csparser.bundle/Contents/MacOS/csparser 0x6d5d000 - 0x6d62ff7 +array.so (??? - ???) /Applications/Programming/*/array.so 0x7256000 - 0x7256ff7 +cl_kernels (??? - ???) <1CE74C09-AF9D-40C2-97E9-93A80583A9DC> cl_kernels 0x725c000 - 0x725dff5 +cl_kernels (??? - ???) <13D08A2C-EC89-45BF-928B-46E9B985CA43> cl_kernels 0x7274000 - 0x7275ff1 +cl_kernels (??? - ???) 
<55021CFE-FD25-486D-A73F-7A572E682E0C> cl_kernels 0x74cf000 - 0x74d2ffb +com.stclairsoft.DefaultFolderX.osax (4.5.1 - 4.5.1) <0B1B0FB4-B782-D01B-5109-517C9A1AA949> /Library/ScriptingAdditions/Default Folder X Addition.osax/Contents/MacOS/Default Folder X Addition 0x74d9000 - 0x7508ff7 +com.stclairsoft.DefaultFolderX.CarbonPatcher (4.5.1 - 4.5.1) <2F1C473C-A672-3E1F-653A-B770E49300B5> /Users/USER/Library/PreferencePanes/Default Folder X.prefPane/Contents/Resources/Default Folder X.bundle/Contents/Resources/Carbon Patcher.bundle/Contents/MacOS/Carbon Patcher 0x7521000 - 0x7546ff3 +com.stclairsoft.DefaultFolderX.CocoaPatcher (4.5.1d5 - 4.5.1d5) <24A0D1ED-E4D6-1EA1-0F4F-43FB2186863F> /Users/USER/Library/PreferencePanes/Default Folder X.prefPane/Contents/Resources/Default Folder X.bundle/Contents/Resources/Cocoa Patcher.bundle/Contents/MacOS/Cocoa Patcher 0x8d4a000 - 0x8decfe7 +unicodedata.so (??? - ???) /Applications/Programming/*/unicodedata.so 0x8f07000 - 0x8f10ff6 libcldcpuengine.dylib (2.0.19 - compatibility 1.0.0) <95A88DC8-E5EE-363F-9275-214D5AB7A2EF> /System/Library/Frameworks/OpenCL.framework/Libraries/libcldcpuengine.dylib 0x8f17000 - 0x8f19fff libCoreFSCache.dylib (??? - ???) <9E7CBE71-566C-36E9-A49F-C5FF6956D76F> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreFSCache.dylib 0x8f28000 - 0x8f28ffd +cl_kernels (??? - ???) cl_kernels 0x8f2a000 - 0x8fd2ff7 unorm8_bgra.dylib (2.0.19 - compatibility 1.0.0) <99A967D2-5577-396B-BD11-56EAFF962AB2> /System/Library/Frameworks/OpenCL.framework/Libraries/ImageFormats/unorm8_bgra.dylib 0x90ce000 - 0x90cfffd +cl_kernels (??? - ???) <3535F911-E8EE-47F7-84F6-C1F8B876FBB1> cl_kernels 0xc0fa000 - 0xc0faff1 +cl_kernels (??? - ???) cl_kernels 0xfc42000 - 0xfceaff7 unorm8_argb.dylib (2.0.19 - compatibility 1.0.0) <1C5CBAF6-9739-340F-9CD6-10D08FEF554F> /System/Library/Frameworks/OpenCL.framework/Libraries/ImageFormats/unorm8_argb.dylib 0x8fee8000 - 0x8ff1aaa7 dyld (195.6 - ???) 
<3A866A34-4CDD-35A4-B26E-F145B05F3644> /usr/lib/dyld 0x90005000 - 0x90043fff libRIP.A.dylib (600.0.0 - compatibility 64.0.0) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libRIP.A.dylib 0x90081000 - 0x9008cffb com.apple.speech.recognition.framework (4.0.21 - 4.0.21) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x90094000 - 0x900abff8 com.apple.CoreMediaAuthoring (2.0 - 891) <69D569FD-670C-3BD0-94BF-7A8954AA2953> /System/Library/PrivateFrameworks/CoreMediaAuthoring.framework/Versions/A/CoreMediaAuthoring 0x900ac000 - 0x900c1ff7 com.apple.ImageCapture (7.1.0 - 7.1.0) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ImageCapture.framework/Versions/A/ImageCapture 0x900c2000 - 0x900ffff7 libcups.2.dylib (2.9.0 - compatibility 2.0.0) <007A1877-E981-3007-A8FA-9B179F4ED6D1> /usr/lib/libcups.2.dylib 0x9011e000 - 0x901deffb com.apple.ColorSync (4.7.4 - 4.7.4) <0A68AF35-15DF-3A0A-9B17-70CE2A106A6C> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x901df000 - 0x901ffff7 com.apple.RemoteViewServices (1.5 - 44.2) <11C87337-FF29-3976-A230-6387D96563C5> /System/Library/PrivateFrameworks/RemoteViewServices.framework/Versions/A/RemoteViewServices 0x90200000 - 0x90259fff com.apple.HIServices (1.21 - ???) 
<91EC636D-AC27-3332-BA1C-FD7301917429> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x9045c000 - 0x9055bffb com.apple.DiskImagesFramework (10.7.4 - 331.7) <31A74A7E-E2AE-313D-A7C4-6DFCF0F22C9A> /System/Library/PrivateFrameworks/DiskImages.framework/Versions/A/DiskImages 0x9055c000 - 0x9055ffff com.apple.AppleSystemInfo (1.0 - 1) <0E02BA66-4EA6-3EA1-8D81-3D0DE36F1CE8> /System/Library/PrivateFrameworks/AppleSystemInfo.framework/Versions/A/AppleSystemInfo 0x90560000 - 0x90637ff3 com.apple.avfoundation (2.0 - 180.50) <7B7FDF30-AC40-3715-A409-B5A27F7B5585> /System/Library/Frameworks/AVFoundation.framework/Versions/A/AVFoundation 0x9074a000 - 0x90780ff4 com.apple.LDAPFramework (3.2 - 120.2) /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP 0x90787000 - 0x907fdfff com.apple.Metadata (10.7.0 - 627.37) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata 0x90846000 - 0x9084aff3 libsystem_network.dylib (??? - ???) <62EBADDA-FC72-3275-AAB3-5EDD949FEFAF> /usr/lib/system/libsystem_network.dylib 0x9084b000 - 0x90879fe7 libSystem.B.dylib (159.1.0 - compatibility 1.0.0) <30189C33-6ADD-3142-83F3-6114B1FC152E> /usr/lib/libSystem.B.dylib 0x9087a000 - 0x9087dff7 libcompiler_rt.dylib (6.0.0 - compatibility 1.0.0) <7F6C14CC-0169-3F1B-B89C-372F67F1F3B5> /usr/lib/system/libcompiler_rt.dylib 0x9087e000 - 0x90914ff7 com.apple.LaunchServices (480.40 - 480.40) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices 0x90949000 - 0x90bcefe3 com.apple.QuickTime (7.7.1 - 2339) /System/Library/Frameworks/QuickTime.framework/Versions/A/QuickTime 0x90c04000 - 0x90c0dfff libc++abi.dylib (14.0.0 - compatibility 1.0.0) /usr/lib/libc++abi.dylib 0x90c0e000 - 0x90f54ff3 com.apple.HIToolbox (1.9 - ???) 
/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x90f96000 - 0x90f9cffd com.apple.CommerceCore (1.0 - 17) /System/Library/PrivateFrameworks/CommerceKit.framework/Versions/A/Frameworks/CommerceCore.framework/Versions/A/CommerceCore 0x90f9d000 - 0x90fffff3 libstdc++.6.dylib (52.0.0 - compatibility 7.0.0) <266CE9B3-526A-3C41-BA58-7AE66A3B15FD> /usr/lib/libstdc++.6.dylib 0x91000000 - 0x91001fff libDiagnosticMessagesClient.dylib (??? - ???) /usr/lib/libDiagnosticMessagesClient.dylib 0x910a4000 - 0x91131fe7 libvMisc.dylib (325.4.0 - compatibility 1.0.0) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x91177000 - 0x912a3ff9 com.apple.CFNetwork (520.5.1 - 520.5.1) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x912a4000 - 0x912a8ff7 com.apple.OpenDirectory (10.7 - 146) <4986A382-8FEF-3392-8CE9-CF6A5EE4E365> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory 0x912a9000 - 0x9138cff7 libcrypto.0.9.8.dylib (44.0.0 - compatibility 0.9.8) /usr/lib/libcrypto.0.9.8.dylib 0x913b6000 - 0x913e3ff9 com.apple.securityinterface (5.0 - 55022.6) <0FA3E84B-B5FF-3A58-A408-46280982CACC> /System/Library/Frameworks/SecurityInterface.framework/Versions/A/SecurityInterface 0x913e4000 - 0x913f1fff com.apple.HelpData (2.1.2 - 72.2) <330C6B7F-2512-37B7-B2FF-24E1804E9426> /System/Library/PrivateFrameworks/HelpData.framework/Versions/A/HelpData 0x9143f000 - 0x91440ff7 libquarantine.dylib (36.7.0 - compatibility 1.0.0) <46980EC2-149D-3CF7-B29A-401FB89C275D> /usr/lib/system/libquarantine.dylib 0x91441000 - 0x914a6ff7 libvDSP.dylib (325.4.0 - compatibility 1.0.0) <4B4B32D2-4F66-3B0D-BD61-FA8429FF8507> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x914df000 - 0x914e2ffc libpam.2.dylib (3.0.0 - compatibility 3.0.0) 
<6FFDBD60-5EC6-3EFA-996B-EE030443C16C> /usr/lib/libpam.2.dylib 0x914e3000 - 0x91540ffb com.apple.htmlrendering (76 - 1.1.4) <409EF0CB-2997-369A-9326-BE12436B9EE1> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HTMLRendering.framework/Versions/A/HTMLRendering 0x91541000 - 0x915a3ffb com.apple.datadetectorscore (3.0 - 179.4) <3A418498-C189-37A1-9B86-F0ECB33AD91C> /System/Library/PrivateFrameworks/DataDetectorsCore.framework/Versions/A/DataDetectorsCore 0x915a4000 - 0x915dbfef com.apple.DebugSymbols (2.1 - 87) /System/Library/PrivateFrameworks/DebugSymbols.framework/Versions/A/DebugSymbols 0x915dc000 - 0x915e2ffb com.apple.print.framework.Print (7.4 - 247.3) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x915e6000 - 0x915f6fff com.apple.LangAnalysis (1.7.0 - 1.7.0) <6D6F0C9D-2EEA-3578-AF3D-E2A09BCECAF3> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x91667000 - 0x91668ff4 libremovefile.dylib (21.1.0 - compatibility 1.0.0) <6DE3FDC7-0BE0-3791-B6F5-C15422A8AFB8> /usr/lib/system/libremovefile.dylib 0x916a0000 - 0x916e2ff7 com.apple.CoreMedia (1.0 - 705.94) <10D5D25F-9BCB-3406-B737-23D9FDF2CC71> /System/Library/Frameworks/CoreMedia.framework/Versions/A/CoreMedia 0x916e3000 - 0x91724ff9 libcurl.4.dylib (7.0.0 - compatibility 7.0.0) <9FD420FB-7984-3A07-8914-BB19E687D38B> /usr/lib/libcurl.4.dylib 0x91738000 - 0x9178bff3 com.apple.ImageCaptureCore (3.1.0 - 3.1.0) /System/Library/Frameworks/ImageCaptureCore.framework/Versions/A/ImageCaptureCore 0x9178c000 - 0x91819ff7 com.apple.CoreText (220.22.0 - ???) 
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/Versions/A/CoreText 0x9187b000 - 0x918c5ff2 com.apple.Suggestions (1.1 - 85.1) <1057087C-AC51-3C3B-BECD-BF97426B2372> /System/Library/PrivateFrameworks/Suggestions.framework/Versions/A/Suggestions 0x918d9000 - 0x919d1ff7 libFontParser.dylib (??? - ???) <71B33EB1-27F8-3C68-B940-FC61A3CFE275> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.dylib 0x91a27000 - 0x91a55ff7 com.apple.DictionaryServices (1.2.1 - 158.3) <8D03D180-D834-39F3-A106-78E0B22A7893> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices 0x91a6f000 - 0x91c8bff7 com.apple.imageKit (2.1.2 - 1.0) <71FC9A62-4E07-307C-8E6B-4DE7661DC0A3> /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/ImageKit.framework/Versions/A/ImageKit 0x91c8c000 - 0x91c8cfff com.apple.audio.units.AudioUnit (1.7.3 - 1.7.3) <2E71E880-25D1-3210-8D26-21EC47ED810C> /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x91c8d000 - 0x92102ff7 FaceCoreLight (1.4.7 - compatibility 1.0.0) <3E2BF587-5168-3FC5-9D8D-183A9C7C1DED> /System/Library/PrivateFrameworks/FaceCoreLight.framework/Versions/A/FaceCoreLight 0x92103000 - 0x9210bfff com.apple.DiskArbitration (2.4.1 - 2.4.1) <28D5D8B5-14E8-3DA1-9085-B9BC96835ACF> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x9210c000 - 0x921eefff com.apple.backup.framework (1.3.5 - 1.3.5) <1FAE91F2-BCEF-387D-B5C4-412C464DA1BE> /System/Library/PrivateFrameworks/Backup.framework/Versions/A/Backup 0x921ef000 - 0x921efff2 com.apple.CoreServices (53 - 53) <7CB7AA95-D5A7-366A-BB8A-035AA9E582F8> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x9225f000 - 0x9248affb com.apple.QuartzComposer (5.0 - 236.10) <416993F4-2868-35FF-90DE-34C93D83574F> 
/System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/QuartzComposer.framework/Versions/A/QuartzComposer 0x9248b000 - 0x924cffff com.apple.MediaKit (12 - 602) <6E429DD7-8829-37DE-94AF-940FB70F2FB9> /System/Library/PrivateFrameworks/MediaKit.framework/Versions/A/MediaKit 0x924d0000 - 0x924d2ff9 com.apple.securityhi (4.0 - 1) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x924d3000 - 0x92547fff com.apple.CoreSymbolication (2.2 - 73.2) /System/Library/PrivateFrameworks/CoreSymbolication.framework/Versions/A/CoreSymbolication 0x92632000 - 0x92642ff7 libCRFSuite.dylib (??? - ???) <94E040D2-2769-359A-A21B-DB85FCB73BDC> /usr/lib/libCRFSuite.dylib 0x92669000 - 0x926baff9 com.apple.ScalableUserInterface (1.0 - 1) <3C39DF4D-5CAE-373A-BE08-8CD16E514337> /System/Library/Frameworks/QuartzCore.framework/Versions/A/Frameworks/ScalableUserInterface.framework/Versions/A/ScalableUserInterface 0x926bb000 - 0x926bfffa libcache.dylib (47.0.0 - compatibility 1.0.0) <56256537-6538-3522-BCB6-2C79DA6AC8CD> /usr/lib/system/libcache.dylib 0x94153000 - 0x94196ffd libcommonCrypto.dylib (55010.0.0 - compatibility 1.0.0) <6B35F203-5D72-335A-A4BC-CC89FEC0E14F> /usr/lib/system/libcommonCrypto.dylib 0x94197000 - 0x941beff3 com.apple.framework.Apple80211 (7.4.1 - 741.1) <7F29673A-B030-34AF-B8CA-AB30DD63FFAB> /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Apple80211 0x941bf000 - 0x94208ff7 libGLU.dylib (??? - ???) 
<9AF7AD51-16E3-3674-B60E-30EE499D7B46> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x94209000 - 0x94249ff7 com.apple.NavigationServices (3.7 - 193) <16A8BCC8-7343-3A90-88B3-AAA334DF615F> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/NavigationServices.framework/Versions/A/NavigationServices 0x9424a000 - 0x94255ffe libbz2.1.0.dylib (1.0.5 - compatibility 1.0.0) /usr/lib/libbz2.1.0.dylib 0x9425f000 - 0x942e9ffb com.apple.SearchKit (1.4.0 - 1.4.0) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x942ea000 - 0x942edff9 libCGXType.A.dylib (600.0.0 - compatibility 64.0.0) <16DCE20A-9790-369A-94C1-B7954B418C77> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGXType.A.dylib 0x942ee000 - 0x946f0ff6 libLAPACK.dylib (??? - ???) <00BE0221-8564-3F87-9F6B-8A910CF2F141> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x946fc000 - 0x946fdfff libsystem_blocks.dylib (53.0.0 - compatibility 1.0.0) /usr/lib/system/libsystem_blocks.dylib 0x947a5000 - 0x947b8ffc com.apple.FileSync.framework (6.0.1 - 502.2) /System/Library/PrivateFrameworks/FileSync.framework/Versions/A/FileSync 0x947b9000 - 0x948c9fe7 libsqlite3.dylib (9.6.0 - compatibility 9.0.0) <34E1E3CC-7B6A-3B37-8D07-1258D11E16CB> /usr/lib/libsqlite3.dylib 0x948ca000 - 0x948cfffb com.apple.phonenumbers (1.0 - 47) <1830301D-5409-3949-9614-C43C62B39DDA> /System/Library/PrivateFrameworks/PhoneNumbers.framework/Versions/A/PhoneNumbers 0x948d0000 - 0x948ecff1 libPng.dylib (??? - ???) 
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib 0x948ed000 - 0x94a0bfec com.apple.vImage (5.1 - 5.1) <7757F253-B281-3612-89D4-F2B04061CBE1> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x94a0c000 - 0x94a29fff libresolv.9.dylib (46.1.0 - compatibility 1.0.0) <2870320A-28DA-3B44-9D82-D56E0036F6BB> /usr/lib/libresolv.9.dylib 0x94a2a000 - 0x94c01fe7 com.apple.CoreFoundation (6.7.2 - 635.21) <4D1D2BAF-1332-32DF-A81B-7E79D4F0A6CB> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x94c02000 - 0x94c53ff9 com.apple.QuickLookFramework (3.2 - 500.18) /System/Library/Frameworks/QuickLook.framework/Versions/A/QuickLook 0x94c54000 - 0x94c58fff com.apple.CommonPanels (1.2.5 - 94) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x94c5a000 - 0x94c5dffd libCoreVMClient.dylib (??? - ???) /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreVMClient.dylib 0x94c5e000 - 0x94c65ff8 libCGXCoreImage.A.dylib (600.0.0 - compatibility 64.0.0) <4F9DD9D1-F251-3661-A3C6-B1F550B084B0> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGXCoreImage.A.dylib 0x94c8a000 - 0x94d26fff com.apple.ink.framework (10.7.5 - 113) <05CAFB64-D3B8-3973-87EA-CB8BBE580F6B> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x94d27000 - 0x94d2dfff libGFXShared.dylib (??? - ???) <9C9834EB-B794-38C8-9B90-31D8CB234F86> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib 0x94d70000 - 0x94dd4fff com.apple.framework.IOKit (2.0 - ???) 
<94827954-5906-36C4-819B-24CDAFD85C72> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x94dd5000 - 0x94de3ff7 com.apple.AppleFSCompression (37 - 1.0) /System/Library/PrivateFrameworks/AppleFSCompression.framework/Versions/A/AppleFSCompression 0x94de4000 - 0x94deeff0 com.apple.DirectoryService.Framework (10.7 - 146) <59061A4B-D743-3A34-B142-7BE2472BBC2D> /System/Library/Frameworks/DirectoryService.framework/Versions/A/DirectoryService 0x94e08000 - 0x94e0bffb com.apple.help (1.3.2 - 42) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x94e0c000 - 0x94e11ff7 libmacho.dylib (800.0.0 - compatibility 1.0.0) <943213F3-CC9B-328E-8A6F-16D85C4274C7> /usr/lib/system/libmacho.dylib 0x95192000 - 0x9543ffff com.apple.JavaScriptCore (7534.57 - 7534.57.3) <5AE5C3B8-D807-356B-80D9-4D0A706A10D1> /System/Library/Frameworks/JavaScriptCore.framework/Versions/A/JavaScriptCore 0x9544c000 - 0x954d3fff com.apple.print.framework.PrintCore (7.1 - 366.3) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/PrintCore 0x954d4000 - 0x95637fff com.apple.QTKit (7.7.1 - 2339) <163FBDDD-0458-378F-84DD-CB0F603A259E> /System/Library/Frameworks/QTKit.framework/Versions/A/QTKit 0x95638000 - 0x9563aff7 libdyld.dylib (195.6.0 - compatibility 1.0.0) <1F865C73-5803-3B08-988C-65B8D86CB7BE> /usr/lib/system/libdyld.dylib 0x9563b000 - 0x95650fff com.apple.speech.synthesis.framework (4.0.74 - 4.0.74) <92AADDB0-BADF-3B00-8941-B8390EDC931B> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x95684000 - 0x956a7fff com.apple.CoreVideo (1.7 - 70.3) <4234C11C-E8E9-309A-9465-27D6D7458895> /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo 0x956a8000 - 0x956c9fff com.apple.framework.internetaccounts (1.2 - 3) /System/Library/PrivateFrameworks/InternetAccounts.framework/Versions/A/InternetAccounts 
0x956ca000 - 0x95711ff5 com.apple.opencl (2.0.19 - 2.0.19) <7689E7B9-EE5A-3F74-8699-4CDED9162260> /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL 0x95712000 - 0x9573bfff com.apple.shortcut (2.1 - 2.1) <60000F49-6309-34B1-88A1-588DC566C8DF> /System/Library/PrivateFrameworks/Shortcut.framework/Versions/A/Shortcut 0x9573c000 - 0x95743ff9 libsystem_dnssd.dylib (??? - ???) /usr/lib/system/libsystem_dnssd.dylib 0x95744000 - 0x9576cff7 libxslt.1.dylib (3.24.0 - compatibility 3.0.0) /usr/lib/libxslt.1.dylib 0x9576d000 - 0x957dcfff com.apple.Heimdal (2.2 - 2.0) <2E1B8779-36D4-3C62-A67E-0034D77D7707> /System/Library/PrivateFrameworks/Heimdal.framework/Versions/A/Heimdal 0x9581b000 - 0x9581cfff com.apple.TrustEvaluationAgent (2.0 - 1) <4BB39578-2F5E-3A50-AD59-9C0AB99472EB> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent 0x9581d000 - 0x95cf9ff6 libBLAS.dylib (??? - ???) <134ABFC6-F29E-3DC5-8E57-E13CB6EF7B41> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x95cfc000 - 0x95d30ff8 libssl.0.9.8.dylib (44.0.0 - compatibility 0.9.8) <567E922C-E64F-321B-9A47-6B18BF481625> /usr/lib/libssl.0.9.8.dylib 0x95d31000 - 0x95d31ffe libkeymgr.dylib (23.0.0 - compatibility 1.0.0) <7F0E8EE2-9E8F-366F-9988-E2F119DB9A82> /usr/lib/system/libkeymgr.dylib 0x95d32000 - 0x95d3bffc com.apple.DisplayServicesFW (2.5.4 - 323.3) <820C4B45-814A-3101-A1FA-044CA6D2FBC8> /System/Library/PrivateFrameworks/DisplayServices.framework/Versions/A/DisplayServices 0x95d3c000 - 0x95d4cfff libsasl2.2.dylib (3.15.0 - compatibility 3.0.0) /usr/lib/libsasl2.2.dylib 0x95e85000 - 0x95e89ffd IOSurface (??? - ???) /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface 0x95e9a000 - 0x95ecdfef libtidy.A.dylib (??? - ???) /usr/lib/libtidy.A.dylib 0x95ece000 - 0x95f72fff com.apple.QD (3.40 - ???) 
<3881BEC6-0908-3073-BA44-346356E1CDF9> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/QD.framework/Versions/A/QD 0x95f73000 - 0x95fceff3 com.apple.Symbolication (1.3 - 91) <4D12D2EC-5010-3958-A205-9A67E972C76A> /System/Library/PrivateFrameworks/Symbolication.framework/Versions/A/Symbolication 0x95fcf000 - 0x9602dff7 com.apple.coreui (1.2.2 - 165.11) <340B0B83-1407-3AB4-BCAB-505C29303EE2> /System/Library/PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI 0x9602e000 - 0x962e0ff7 com.apple.AddressBook.framework (6.1.3 - 1091) /System/Library/Frameworks/AddressBook.framework/Versions/A/AddressBook 0x962e1000 - 0x96317ff7 com.apple.AE (527.7 - 527.7) <7BAFBF18-3997-3656-9823-FD3B455056A4> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x96318000 - 0x9632bff8 com.apple.MultitouchSupport.framework (231.4 - 231.4) <083F7787-4C3B-31DA-B5BB-1993D9A9723D> /System/Library/PrivateFrameworks/MultitouchSupport.framework/Versions/A/MultitouchSupport 0x9632c000 - 0x96720feb com.apple.VideoToolbox (1.0 - 705.94) <8FCC2C08-2D4C-3A96-B57A-CAA56911120F> /System/Library/PrivateFrameworks/VideoToolbox.framework/Versions/A/VideoToolbox 0x96721000 - 0x96809fff libxml2.2.dylib (10.3.0 - compatibility 10.0.0) <1841196F-68B5-309F-8ED1-6714B1DFEC83> /usr/lib/libxml2.2.dylib 0x9680a000 - 0x9680afff com.apple.Accelerate (1.7 - Accelerate 1.7) <4192CE7A-BCE0-3D3C-AAF7-6F1B3C607386> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x9680b000 - 0x9680ffff libGIF.dylib (??? - ???) <2ADFED97-2228-343D-9A53-207CBFDE7984> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x96810000 - 0x96810fff libOpenScriptingUtil.dylib (??? - ???) 
/usr/lib/libOpenScriptingUtil.dylib 0x9685e000 - 0x968a2ff3 com.apple.framework.CoreWLAN (2.1.3 - 213.1) <8A99ADB8-4A3E-3B8E-A0E4-A39398C288EC> /System/Library/Frameworks/CoreWLAN.framework/Versions/A/CoreWLAN 0x968a3000 - 0x96a9bff7 com.apple.CoreData (104.1 - 358.14) /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x96a9c000 - 0x96d76ff7 com.apple.RawCamera.bundle (4.00 - 658) /System/Library/CoreServices/RawCamera.bundle/Contents/MacOS/RawCamera 0x97393000 - 0x973f4ffb com.apple.audio.CoreAudio (4.0.3 - 4.0.3) <7A14BE52-6789-3CE3-9AE9-B733F4903EB1> /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio 0x973f5000 - 0x973f5fff com.apple.vecLib (3.7 - vecLib 3.7) <8CCF99BF-A4B7-3C01-9219-B83D2AE5F82A> /System/Library/Frameworks/vecLib.framework/Versions/A/vecLib 0x973f6000 - 0x9745efff libc++.1.dylib (28.4.0 - compatibility 1.0.0) /usr/lib/libc++.1.dylib 0x9745f000 - 0x97460fff liblangid.dylib (??? - ???) /usr/lib/liblangid.dylib 0x97461000 - 0x97477ffe libxpc.dylib (77.19.0 - compatibility 1.0.0) <0585AA94-F4FD-32C1-B586-22E7184B781A> /usr/lib/system/libxpc.dylib 0x97478000 - 0x97478fff libdnsinfo.dylib (395.11.0 - compatibility 1.0.0) <7EFAD88C-AFBC-3D48-BE14-60B8EACC68D7> /usr/lib/system/libdnsinfo.dylib 0x97479000 - 0x975dbffb com.apple.QuartzCore (1.7 - 270.5) <6D0EC7FC-11E5-35FB-A08A-3B438E89FBDB> /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x975dc000 - 0x975eafff libdispatch.dylib (187.10.0 - compatibility 1.0.0) <1B857064-288D-3919-B81A-38E9F4D19B54> /usr/lib/system/libdispatch.dylib 0x975eb000 - 0x976bbffb com.apple.ImageIO.framework (3.1.2 - 3.1.2) <2092785C-795A-3CDF-A1B4-6C80BA3726DD> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x976bc000 - 0x979ebff7 com.apple.FinderKit (1.0.5 - 1.0.5) <5F2FB244-8734-31FA-A957-0F4B603E02BB> /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit 0x979ec000 - 0x9831772b 
com.apple.CoreGraphics (1.600.0 - ???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x98324000 - 0x98346ffe com.apple.framework.familycontrols (3.0 - 300) <6735D7ED-7053-3AB8-B144-E7F70A124CCD> /System/Library/PrivateFrameworks/FamilyControls.framework/Versions/A/FamilyControls 0x98347000 - 0x9834effd com.apple.NetFS (4.0 - 4.0) /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS 0x9839b000 - 0x983a3ff5 libcopyfile.dylib (85.1.0 - compatibility 1.0.0) /usr/lib/system/libcopyfile.dylib 0x98481000 - 0x984c1ff7 libauto.dylib (??? - ???) <984C81BE-FA1C-3228-8F7E-2965E7E5EB85> /usr/lib/libauto.dylib 0x984c6000 - 0x9873aff3 com.apple.CoreImage (7.99.1 - 1.0.1) /System/Library/Frameworks/QuartzCore.framework/Versions/A/Frameworks/CoreImage.framework/Versions/A/CoreImage 0x9873b000 - 0x98a81fff com.apple.MediaToolbox (1.0 - 705.94) <89D37021-C389-3CC5-A158-620ADCBD99EF> /System/Library/PrivateFrameworks/MediaToolbox.framework/Versions/A/MediaToolbox 0x98a82000 - 0x98aadfff com.apple.GSS (2.2 - 2.0) <2C468B23-FA87-30B5-B9A6-8C5D1373AA30> /System/Library/Frameworks/GSS.framework/Versions/A/GSS 0x98aae000 - 0x98c04fff com.apple.audio.toolbox.AudioToolbox (1.7.3 - 1.7.3) /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x98c05000 - 0x98c12fff libGL.dylib (??? - ???) /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x98eec000 - 0x98f34ff7 com.apple.SystemConfiguration (1.11.3 - 1.11) <68B92FEA-F754-3E7E-B5E6-D512E26144E7> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x98f35000 - 0x98f64ff7 libsystem_info.dylib (??? - ???) 
<37640811-445B-3BB7-9934-A7C99848250D> /usr/lib/system/libsystem_info.dylib 0x990b9000 - 0x990d1fff com.apple.frameworks.preferencepanes (15.0 - 15.0) /System/Library/Frameworks/PreferencePanes.framework/Versions/A/PreferencePanes 0x990d3000 - 0x990deffe com.apple.NetAuth (3.2 - 3.2) <4377FB18-A550-35C6-BCD2-71C42134EEA6> /System/Library/PrivateFrameworks/NetAuth.framework/Versions/A/NetAuth 0x990df000 - 0x99b74ff6 com.apple.AppKit (6.7.5 - 1138.51) /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x99b75000 - 0x99b91ff5 com.apple.GenerationalStorage (1.0 - 126.1) /System/Library/PrivateFrameworks/GenerationalStorage.framework/Versions/A/GenerationalStorage 0x99b92000 - 0x99bcffef libGLImage.dylib (??? - ???) /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x99d05000 - 0x99d05fff com.apple.Carbon (153 - 153) <63603A0C-723B-3968-B302-EBEEE6A14E97> /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon 0x99d11000 - 0x99d61ffa libTIFF.dylib (??? - ???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x99d62000 - 0x99e1fff3 ColorSyncDeprecated.dylib (4.6.0 - compatibility 1.0.0) <726898F5-E718-3F27-B415-D6FDCDE09174> /System/Library/Frameworks/ApplicationServices.framework/Frameworks/ColorSync.framework/Versions/A/Resources/ColorSyncDeprecated.dylib 0x99f6b000 - 0x99f85fff com.apple.Kerberos (1.0 - 1) /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos 0x99f86000 - 0x9a612ff5 com.apple.CoreAUC (6.16.12 - 6.16.12) <9D51400F-B827-3BA7-87F5-954A1CDDAEA9> /System/Library/PrivateFrameworks/CoreAUC.framework/Versions/A/CoreAUC 0x9a613000 - 0x9a724ff7 libJP2.dylib (??? - ???) 
<2B5EB147-F845-30DF-87C4-D2D3C3D0680A> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJP2.dylib 0x9a725000 - 0x9aa27fff com.apple.CoreServices.CarbonCore (960.25 - 960.25) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x9aa28000 - 0x9aa29ffd libCVMSPluginSupport.dylib (??? - ???) <4B0476F9-950D-3EB7-BD83-F65AF0B05F0E> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.dylib 0x9aa2a000 - 0x9aa38ff7 libxar-nossl.dylib (??? - ???) <5BF4DA8E-C319-354A-967E-A0C725DC8BA3> /usr/lib/libxar-nossl.dylib 0x9aa39000 - 0x9aa43ff2 com.apple.audio.SoundManager (3.9.4.1 - 3.9.4.1) <2A089CE8-9760-3F0F-B77D-29A78940EA17> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CarbonSound.framework/Versions/A/CarbonSound 0x9aa44000 - 0x9aa45ff7 libsystem_sandbox.dylib (??? - ???) <5CFCCFB7-CF29-3E04-801D-8532AE004768> /usr/lib/system/libsystem_sandbox.dylib 0x9aaa0000 - 0x9aaaefff com.apple.opengl (1.8.1 - 1.8.1) <766AFB12-A2CB-3A55-B662-FC9FFCAE0008> /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x9aaaf000 - 0x9aab7ff3 libunwind.dylib (30.0.0 - compatibility 1.0.0) /usr/lib/system/libunwind.dylib 0x9aab8000 - 0x9ab35fff com.apple.PDFKit (2.6.4 - 2.6.4) /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/PDFKit.framework/Versions/A/PDFKit 0x9ab36000 - 0x9ab9eff3 com.apple.ISSupport (1.9.8 - 56) <59225A65-41C1-35CA-9F29-229AC427B728> /System/Library/PrivateFrameworks/ISSupport.framework/Versions/A/ISSupport 0x9ab9f000 - 0x9abadfff libz.1.dylib (1.2.5 - compatibility 1.0.0) /usr/lib/libz.1.dylib 0x9adcb000 - 0x9af7fff3 libicucore.A.dylib (46.1.0 - compatibility 1.0.0) <4AFF6FC3-6283-3934-8EFC-CA227CA11164> /usr/lib/libicucore.A.dylib 0x9af80000 - 0x9afcfffb com.apple.AppleVAFramework (5.0.16 - 5.0.16) <1188E7AB-76FE-343F-9108-30CD67E5A37B> 
/System/Library/PrivateFrameworks/AppleVA.framework/Versions/A/AppleVA 0x9afd0000 - 0x9b0dfff7 com.apple.DesktopServices (1.6.5 - 1.6.5) /System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x9b0e0000 - 0x9b1c1ff7 com.apple.DiscRecording (6.0.4 - 6040.4.1) <08BADDAD-FA79-3872-9387-EEE2A9FAA2F0> /System/Library/Frameworks/DiscRecording.framework/Versions/A/DiscRecording 0x9b1c2000 - 0x9b28dfff libsystem_c.dylib (763.13.0 - compatibility 1.0.0) <52421B00-79C8-3727-94DE-62F6820B9C31> /usr/lib/system/libsystem_c.dylib 0x9b3bd000 - 0x9b3beff0 libunc.dylib (24.0.0 - compatibility 1.0.0) <2F4B35B2-706C-3383-AA86-DABA409FAE45> /usr/lib/system/libunc.dylib 0x9b3bf000 - 0x9b3bffff com.apple.Accelerate.vecLib (3.7 - vecLib 3.7) <22997C20-BEB7-301D-86C5-5BFB3B06D212> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x9b3c0000 - 0x9b3cbff3 libCSync.A.dylib (600.0.0 - compatibility 64.0.0) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCSync.A.dylib 0x9b3cc000 - 0x9b3cfff7 libmathCommon.A.dylib (2026.0.0 - compatibility 1.0.0) <69357047-7BE0-3360-A36D-000F55E39336> /usr/lib/system/libmathCommon.A.dylib 0x9b3d0000 - 0x9b3faff0 libpcre.0.dylib (1.1.0 - compatibility 1.0.0) <5CAA1478-97E0-31EA-8F50-BF09D665DD84> /usr/lib/libpcre.0.dylib 0x9b3fb000 - 0x9b3fbfff com.apple.Cocoa (6.6 - ???) 
<5FAFE73E-6AF5-3D09-9191-0BDC8C6875CB> /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa 0x9b3fc000 - 0x9b410fff com.apple.CFOpenDirectory (10.7 - 146) <9149C1FE-865E-3A8D-B9D9-547384F553C8> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory 0x9b411000 - 0x9b7ccffb com.apple.SceneKit (125.3 - 125.8) <89008B87-87E7-3972-A274-30311497EE32> /System/Library/PrivateFrameworks/SceneKit.framework/Versions/A/SceneKit 0x9b915000 - 0x9b93aff9 libJPEG.dylib (??? - ???) <743578F6-8C0C-39CC-9F15-3A01E1616EAE> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x9b93b000 - 0x9ba13ff6 com.apple.QuickLookUIFramework (3.2 - 500.18) /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/QuickLookUI.framework/Versions/A/QuickLookUI 0x9ba14000 - 0x9ba8fffb com.apple.ApplicationServices.ATS (317.12.0 - ???) <4D124B65-3D43-32E9-B296-3671347BB888> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x9ba9b000 - 0x9bab8ff3 com.apple.openscripting (1.3.3 - ???) 
<0579A4CB-FD6F-3D7F-A17B-AC0F2CF11FC7> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/Versions/A/OpenScripting 0x9bab9000 - 0x9bad7ff7 libsystem_kernel.dylib (1699.32.7 - compatibility 1.0.0) <79179F83-457A-3539-A76B-E960D2108109> /usr/lib/system/libsystem_kernel.dylib 0x9bad8000 - 0x9bb2affb com.apple.CoreMediaIO (216.0 - 3199.8) /System/Library/Frameworks/CoreMediaIO.framework/Versions/A/CoreMediaIO 0x9bb2b000 - 0x9bdfaffb com.apple.security (7.0 - 55148.6) <8DF67BDD-C98F-3B7E-AC63-D468407FA82D> /System/Library/Frameworks/Security.framework/Versions/A/Security 0x9be54000 - 0x9bf2aaab libobjc.A.dylib (228.0.0 - compatibility 1.0.0) <2E272DCA-38A0-3530-BBF4-47AE678D20D4> /usr/lib/libobjc.A.dylib 0x9bf2b000 - 0x9bfa3ff8 com.apple.CorePDF (3.1 - 3.1) <0074267B-F74A-30FC-8508-A14C821F0771> /System/Library/PrivateFrameworks/CorePDF.framework/Versions/A/CorePDF 0x9bfa4000 - 0x9c03bff3 com.apple.securityfoundation (5.0 - 55116) /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation 0x9c03c000 - 0x9c08eff7 libFontRegistry.dylib (??? - ???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontRegistry.dylib 0x9c08f000 - 0x9c08ffff com.apple.quartzframework (1.5 - 1.5) <49B5CA00-083A-3D4A-9A68-4759A5CC35A6> /System/Library/Frameworks/Quartz.framework/Versions/A/Quartz 0x9c090000 - 0x9c0b2ff8 com.apple.PerformanceAnalysis (1.11 - 11) <453463FF-7C42-3526-8C96-A9971EE07154> /System/Library/PrivateFrameworks/PerformanceAnalysis.framework/Versions/A/PerformanceAnalysis 0x9c0b3000 - 0x9c0b5ffb libRadiance.dylib (??? - ???) 
<4721057E-5A1F-3083-911B-200ED1CE7678> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib 0x9c0b6000 - 0x9c3c0ff3 com.apple.Foundation (6.7.2 - 833.25) <4C52ED74-A1FD-3087-A2E1-035AB3CF9610> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x9c3c1000 - 0x9c3caff3 com.apple.CommonAuth (2.2 - 2.0) /System/Library/PrivateFrameworks/CommonAuth.framework/Versions/A/CommonAuth 0x9c3cb000 - 0x9c3d2ff7 libsystem_notify.dylib (80.1.0 - compatibility 1.0.0) <47DB9E1B-A7D1-3818-A747-382B2C5D9E1B> /usr/lib/system/libsystem_notify.dylib 0x9c3d3000 - 0x9c3e4fff libbsm.0.dylib (??? - ???) <54ACF696-87C6-3652-808A-17BE7275C230> /usr/lib/libbsm.0.dylib 0x9c3e5000 - 0x9c4d5ff1 libiconv.2.dylib (7.0.0 - compatibility 7.0.0) <9E5F86A3-8405-3774-9E0C-3A074273C96D> /usr/lib/libiconv.2.dylib 0x9c4e4000 - 0x9c50affb com.apple.quartzfilters (1.7.0 - 1.7.0) <64AB163E-7E91-3028-8730-BE11BC1F5237> /System/Library/Frameworks/Quartz.framework/Versions/A/Frameworks/QuartzFilters.framework/Versions/A/QuartzFilters 0x9cda7000 - 0x9cdb2fff libkxld.dylib (??? - ???) 
<14E79D7A-B6C2-35C5-B56D-D343BEC2A106> /usr/lib/system/libkxld.dylib 0x9cdb3000 - 0x9cdb3ff0 com.apple.ApplicationServices (41 - 41) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices 0x9cdd2000 - 0x9ce0dfff com.apple.bom (11.0 - 183) <8E3F690E-4D23-3379-828C-5AB0453B9226> /System/Library/PrivateFrameworks/Bom.framework/Versions/A/Bom 0x9ce0e000 - 0x9ce38ff1 com.apple.CoreServicesInternal (113.19 - 113.19) /System/Library/PrivateFrameworks/CoreServicesInternal.framework/Versions/A/CoreServicesInternal 0x9ce39000 - 0x9cefcfff com.apple.CoreServices.OSServices (478.49 - 478.49) <5AF33605-C893-3F60-89CF-1BC9C0BC35AF> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x9cefd000 - 0x9cf05ff3 liblaunch.dylib (392.39.0 - compatibility 1.0.0) <9E6135FF-C2B1-3BC9-A160-B32D71BFA77C> /usr/lib/system/liblaunch.dylib 0xba900000 - 0xba91bffd libJapaneseConverter.dylib (54.0.0 - compatibility 1.0.0) <473499F3-8CB8-3372-98B0-8E3BCC1A3D80> /System/Library/CoreServices/Encodings/libJapaneseConverter.dylib External Modification Summary: Calls made by other processes targeting this process: task_for_pid: 73 thread_create: 0 thread_set_state: 0 Calls made by this process: task_for_pid: 0 thread_create: 0 thread_set_state: 0 Calls made by all processes on this machine: task_for_pid: 52593 thread_create: 0 thread_set_state: 0 VM Region Summary: ReadOnly portion of Libraries: Total=222.2M resident=108.1M(49%) swapped_out_or_unallocated=114.1M(51%) Writable regions: Total=211.2M written=47.4M(22%) resident=84.6M(40%) swapped_out=108K(0%) unallocated=126.7M(60%) REGION TYPE VIRTUAL =========== ======= ATS (font support) 32.4M CG backing stores 9712K CG image 424K CG raster data 348K CG shared images 3576K CoreGraphics 8K CoreServices 3156K MALLOC 86.6M MALLOC guard page 64K MALLOC_LARGE (reserved) 256K reserved VM address space (unallocated) Memory tag=240 4K Memory tag=242 12K OpenCL 60K 
SQLite page cache 96K Stack 115.7M VM_ALLOCATE 16.4M __CI_BITMAP 80K __DATA 11.8M __DATA/__OBJC 224K __IMAGE 528K __IMPORT 4K __LINKEDIT 51.0M __OBJC 2936K __OBJC/__DATA 16K __PAGEZERO 4K __TEXT 171.2M __UNICODE 544K mapped file 117.5M shared memory 37.3M shared pmap 7296K =========== ======= TOTAL 668.6M TOTAL, minus reserved VM space 668.3M

On Mar 17, 2013, at 9:14 AM, Almar Klein wrote:

> Were there any errors reported?
>
> - Almar
>
> On 17 March 2013 04:22, Jerry wrote:
> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a
> hello world program.
> Jerry
>
> On Mar 16, 2013, at 3:31 PM, Almar Klein wrote:
>
>> Dear all,
>>
>> We're pleased to announce release 2013b of Pyzo distro, a Python
>> distribution for scientific computing based on Python 3. For more
>> information, and to give it a try, visit http://www.pyzo.org.
>>
>> The most notable changes since the last release:
>> - Pyzo is also built for Mac.
>> - Python 3.3 is now used.
>> - Added an executable to launch the IPython notebook.
>> - New versions of several packages, including the IDE.
>>
>> More information:
>>
>> Pyzo is available for Windows, Mac and Linux. The distribution is
>> portable, thus providing a way to install a scientific Python stack on
>> computers without the need for admin rights.
>>
>> Naturally, Pyzo distro is compliant with the scipy-stack, and comes with
>> additional packages such as scikit-image and scikit-learn.
>>
>> With Pyzo we want to make scientific computing in Python more easily
>> accessible. We especially hope to make it easier for newcomers (such as
>> Matlab converts) to join our awesome community. Pyzo uses IEP as the
>> default front-end IDE, and IPython (with notebook) is also available.
>>
>> Happy coding,
>> Almar

From guziy.sasha at gmail.com Sun Mar 17 22:40:33 2013
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Sun, 17 Mar 2013 22:40:33 -0400
Subject: [SciPy-User] ANN: Pyzo distro 2013b
In-Reply-To:
References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net>
Message-ID:

Hi,

here's the error I get on Linux (Linux san-laptop 3.5.0-26-generic
#42-Ubuntu SMP Fri Mar 8 23:18:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux):

>./pyzo
Cannot mix incompatible Qt library (version 0x40803) with this library
(version 0x40804)
Aborted (core dumped)

cheers
--
Oleksandr (Sasha) Huziy

2013/3/17 Almar Klein

> Were there any errors reported?
>
> - Almar
>
> On 17 March 2013 04:22, Jerry wrote:
>
>> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a
>> hello world program.
>> Jerry
>>
>> On Mar 16, 2013, at 3:31 PM, Almar Klein wrote:
>>
>> Dear all,
>>
>> We're pleased to announce release 2013b of Pyzo distro, a Python
>> distribution for scientific computing based on Python 3. For more
>> information, and to give it a try, visit http://www.pyzo.org.
>>
>> The most notable changes since the last release:
>>
>> - Pyzo is also built for Mac.
>> - Python 3.3 is now used.
>> - Added an executable to launch the IPython notebook.
>> - New versions of several packages, including the IDE.
>>
>> More information:
>>
>> Pyzo is available for Windows, Mac and Linux.
>> The distribution is portable, thus providing a way to install a
>> scientific Python stack on computers without the need for admin rights.
>>
>> Naturally, Pyzo distro is compliant with the scipy-stack, and comes with
>> additional packages such as scikit-image and scikit-learn.
>>
>> With Pyzo we want to make scientific computing in Python more easily
>> accessible. We especially hope to make it easier for newcomers (such as
>> Matlab converts) to join our awesome community. Pyzo uses IEP as the
>> default front-end IDE, and IPython (with notebook) is also available.
>
> --
> Almar Klein, PhD
> Science Applied
> phone: +31 6 19268652
> e-mail: a.klein at science-applied.nl

From a.klein at science-applied.nl Mon Mar 18 06:56:55 2013
From: a.klein at science-applied.nl (Almar Klein)
Date: Mon, 18 Mar 2013 11:56:55 +0100
Subject: [SciPy-User] ANN: Pyzo distro 2013b
In-Reply-To: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net>
References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net>
Message-ID:

> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a
> hello world program.

I just found: "Mac OS X 10.7 "Lion" was the first version of OS X to drop
support for 32-bit Intel processors and run exclusively on 64-bit Intel
CPUs." So, I'm sorry, but you'll have to wait for the next release.
The reason that the binaries are 32-bit is that I forgot to make the VM
64-bit when I created it. Whoops. I only found out much later (after all
the hard work of setting up the machine was done). So the blame's on me.

- Almar

> Dear all,
>
> We're pleased to announce release 2013b of Pyzo distro, a Python
> distribution for scientific computing based on Python 3. For more
> information, and to give it a try, visit http://www.pyzo.org.
>
> The most notable changes since the last release:
>
> - Pyzo is also built for Mac.
> - Python 3.3 is now used.
> - Added an executable to launch the IPython notebook.
> - New versions of several packages, including the IDE.
>
> More information:
>
> Pyzo is available for Windows, Mac and Linux. The distribution is
> portable, thus providing a way to install a scientific Python stack on
> computers without the need for admin rights.
>
> Naturally, Pyzo distro is compliant with the scipy-stack, and comes with
> additional packages such as scikit-image and scikit-learn.
>
> With Pyzo we want to make scientific computing in Python more easily
> accessible. We especially hope to make it easier for newcomers (such as
> Matlab converts) to join our awesome community. Pyzo uses IEP as the
> default front-end IDE, and IPython (with notebook) is also available.
>
> Happy coding,
> Almar

--
Almar Klein, PhD
Science Applied
phone: +31 6 19268652
e-mail: a.klein at science-applied.nl
URL: From a.klein at science-applied.nl Mon Mar 18 07:15:53 2013 From: a.klein at science-applied.nl (Almar Klein) Date: Mon, 18 Mar 2013 12:15:53 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> Message-ID: > here's the error I get on linux (Linux san-laptop 3.5.0-26-generic > #42-Ubuntu SMP Fri Mar 8 23:18:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux): > > >./pyzo > Cannot mix incompatible Qt library (version 0x40803) with this library > (version 0x40804) > Aborted (core dumped) > You can fix this by creating a file qt.conf in the root of the Pyzo directory with these two lines: [Paths] Plugins = '.' (Next Pyzo release will probably have this file by default). - Almar > > -- > Oleksandr (Sasha) Huziy > > > > 2013/3/17 Almar Klein > >> Were there any errors reported? >> >> - Almar >> >> >> On 17 March 2013 04:22, Jerry wrote: >> >>> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a >>> hello world program. >>> Jerry >>> >>> On Mar 16, 2013, at 3:31 PM, Almar Klein wrote: >>> >>> Dear all, >>> >>> We're pleased to announce release 2013b of Pyzo distro, a Python >>> distribution for scientific computing based on Python 3. For more >>> information, and to give it a try, visit http://www.pyzo.org. >>> >>> The most notable changes since the last release: >>> >>> - Pyzo is also built for Mac. >>> - Python 3.3 is now used. >>> - Added an executable to launch IPython notebook. >>> - New versions of several packages, including the IDE. >>> >>> More information: >>> >>> Pyzo is available for Windows, Mac and Linux. The distribution is >>> portable, thus providing a way to install a scientific Python stack on >>> computers without the need for admin rights. >>> >>> Naturally, Pyzo distro is compliant with the scipy-stack, and comes >>> with additional packages such as scikit-image and scikit-learn.
>>> >>> With Pyzo we want to make scientific computing in Python more easily >>> accessible. We especially hope to make it easier for newcomers (such as >>> Matlab converts) to join our awesome community. Pyzo uses IEP as the >>> default front-end IDE, and IPython (with notebook) is also available. >>> >>> Happy coding, >>> Almar >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> >> -- >> Almar Klein, PhD >> Science Applied >> phone: +31 6 19268652 >> e-mail: a.klein at science-applied.nl >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Almar Klein, PhD Science Applied phone: +31 6 19268652 e-mail: a.klein at science-applied.nl -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Mar 18 13:13:33 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 13:13:33 -0400 Subject: [SciPy-User] expanding optimize.brentq Message-ID: Just a thought, given the problems with fsolve In scipy.stats.distribution, we use an expanding brentq, which looks like it works very well. https://github.com/scipy/scipy/pull/216/files#L0R1175 A question, given that I have not much experience with root finders: Is there a general algorithm that has similar properties? In these applications we know that the function is strictly monotonic, but we don't have a bound for brentq without checking. For some cases I know one of the bounds (e.g.
0 for non-negative solutions). (I could use that right now as a replacement for fsolve.) Josef From t.arvind at gmail.com Mon Mar 18 14:46:06 2013 From: t.arvind at gmail.com (Arvind Thiagarajan) Date: Mon, 18 Mar 2013 11:46:06 -0700 Subject: [SciPy-User] scipy.stats beginner: help with fitting distribution Message-ID: Hi all, I'm a beginner with SciPy so this may be a basic question. I am trying to fit a Rice distribution to some data using scipy.stats. However, I first tried some test code which doesn't seem to give me a very good fit. I tried the following code: >>> b = [0.3,] >>> samples = rice.rvs(b, loc=0, scale=1, size=1000) >>> rice.fit(samples) (0.0012012190480231357, -0.0023216862043629813, 1.024758538166374) Though the loc and scale seem ok (close to 0 and 1 respectively), I was hoping the first return value would be closer to 0.3 since the shape parameter I supplied while generating random samples was 0.3 (in the call to rvs). I initially thought 1000 samples was perhaps too few, but get similarly poor results with 10,000 samples as well. Concluded that I must be doing something wrong, or that I'm misinterpreting the usage for the rvs() or fit() functions. Is the fit function supposed to try to approximately find the same shape parameter I passed in to rvs? Or should I be interpreting the fit result differently? Wondering if anyone has any ideas? Thanks a lot, Arvind From charlesr.harris at gmail.com Mon Mar 18 14:58:20 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 18 Mar 2013 12:58:20 -0600 Subject: [SciPy-User] expanding optimize.brentq In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 11:13 AM, wrote: > Just a thought, given the problems with fsolve > > In scipy.stats.distribution, we use an expanding brentq, which looks > like it works very well. 
> https://github.com/scipy/scipy/pull/216/files#L0R1175 > > A question, given that I have not much experience with root finders: > > Is there a general algorithm that has similar properties? > > > In these applications we know that the function is strictly monotonic, > but we don't have a bound for brentq without checking. For some cases > I know one of the bounds (e.g. 0 for non-negative solutions). > > (I could use that right now as a replacement for fsolve.) > > I assume you are talking about the 1-D case where you don't have bounding intervals to start with. There are various search methods that can be used to find such intervals, but they can't be guaranteed to work. They all come down to making trials of various points looking for sign reversals, and the searches are either subdividing a larger interval or creating larger and larger intervals by, say, doubling each time around. The addition of such methods would be useful, but they need a failure mode :( Your application looks to be finding points on a monotone function on the interval [0, inf]. For that you could use the interval [0, 1] and map it to [0, inf] with the substitution x/(1-x), although the point x == 1 will be a bit tricky unless your function handles inf gracefully. The accuracy will also tend to be relative rather than absolute, but that seems to be a given in any case. I suppose this approach falls under the change of variables method. Those tend to be problem specific, so I don't know if they would be a good fit for scipy, but certainly they could work well in specialized domains. It would also seem to be a good idea to match the asymptotic behavior if possible, which will tend to linearize the problem. So other options would be functions like log, erf, arctan, etc, for the substitution, but in those cases you probably already have the inverse functions. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
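[Editor's note: Chuck's change-of-variables suggestion can be sketched in a few lines. The helper name and its eps default below are invented for illustration, not an existing scipy API; the assumption is a monotone f with a sign change on [0, inf).]

```python
# Solve f(x) = 0 on [0, inf) by substituting x = u / (1 - u) and running
# brentq on g(u) = f(u / (1 - u)) over the clipped bracket [0, 1 - eps].
from scipy.optimize import brentq

def solve_on_half_line(f, eps=1e-12):
    # u in [0, 1 - eps] maps to x in [0, (1 - eps) / eps], i.e. almost [0, inf)
    g = lambda u: f(u / (1.0 - u))
    u_root = brentq(g, 0.0, 1.0 - eps)
    # map the root in u back to the original variable x
    return u_root / (1.0 - u_root)

# example: the root of x - 1000 lies far outside a naive starting interval
root = solve_on_half_line(lambda x: x - 1000.0)
```

As noted above, the accuracy is relative rather than absolute: a fixed tolerance in u translates into an x-tolerance that grows like 1/(1 - u)**2 as the root moves out toward infinity.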
URL: From jsseabold at gmail.com Mon Mar 18 15:09:32 2013 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 18 Mar 2013 15:09:32 -0400 Subject: [SciPy-User] expanding optimize.brentq In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 2:58 PM, Charles R Harris wrote: > > > On Mon, Mar 18, 2013 at 11:13 AM, wrote: > >> Just a thought, given the problems with fsolve >> >> In scipy.stats.distribution, we use an expanding brentq, which looks >> like it works very well. >> https://github.com/scipy/scipy/pull/216/files#L0R1175 >> >> A question, given that I have not much experience with root finders: >> >> Is there a general algorithm that has similar properties? >> >> >> In these applications we know that the function is strictly monotonic, >> but we don't have a bound for brentq without checking. For some cases >> I know one of the bounds (e.g. 0 for non-negative solutions). >> >> (I could use that right now as a replacement for fsolve.) >> > > Your application looks to be finding points on a monotone function on the > interval [0, inf]. For that you could use the interval [0, 1] and map it to > [0, inf] with the substitution x/(1-x), although the point x == 1 will be a > bit tricky unless your function handles inf gracefully. The accuracy will > also tend to be relative rather than absolute, but that seems to be a given > in any case. I suppose this approach falls under the change of variables > method. > Ah, right. Yes, I think this will work well for our current use case (if I understand the scope of the problem sufficiently). At least it seems to do the trick with the sticking points I was seeing. It also improves the performance without the bounding interval methods - fsolve, etc. Thanks, Chuck. Skipper > Those tend to be problem specific, so I don't know if they would be a good > fit for scipy, but certainly they could work well in specialized domains.
> It would also seem to be a good idea to match the asymptotic behavior if > possible, which will tend to linearize the problem. So other options would > be functions like log, erf, arctan, etc, for the substitution, but in those > cases you probably already have the the inverse functions. > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Mar 18 15:23:19 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 15:23:19 -0400 Subject: [SciPy-User] expanding optimize.brentq In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 2:58 PM, Charles R Harris wrote: > > > On Mon, Mar 18, 2013 at 11:13 AM, wrote: >> >> Just a thought, given the problems with fsolve >> >> In scipy.stats.distribution, we use an expanding brentq, which looks >> like it works very well. >> https://github.com/scipy/scipy/pull/216/files#L0R1175 >> >> A question given that I have not much experience with root finders: >> >> Is there a general algorithm that has similar properties? >> >> >> In these application we know that the function is strictly monotonic, >> but we don't have a bound for brentq without checking. For some cases >> I know one of the bounds (e.g. 0 for non-negative solutions). >> >> (I could use that right now as a replacement for fsolve.) >> > > I assume you are talking about the 1-D case where you don't have bounding > intervals to start with. There are various search methods that can be used > to find such intervals, but they can't be guaranteed to work. They all come > down to making trials of various points looking for sign reversals, and the > searches are either subdividing a larger interval or creating larger and > larger intervals by, say, doubling each time around. 
The addition of such > methods would be useful, but they need a failure mode :( Yes, that's pretty much what I would like, and use in the rewritten scipy.stats.distribution function. But I was hoping there might be something more systematic than "home-made". > > Your application looks to be finding points on a monotone function on the > interval [0, inf]. For that you could use the interval [0, 1] and map it to > [0, inf] with the substitution x/(1-x), although the point x == 1 will be a > bit tricky unless your function handles inf gracefully. The accuracy will > also tend to be relative rather than absolute, but that seems to be a given > in any case. I suppose this approach falls under the change of variables > method. Those tend to be problem specific, so I don't know if they would be > a good fit for scipy, but certainly they could work well in specialized > domains. It would also seem to be a good idea to match the asymptotic > behavior if possible, which will tend to linearize the problem. So other > options would be functions like log, erf, arctan, etc, for the substitution, > but in those cases you probably already have the inverse functions. That might be a good idea to get brentq to influence the selection of subdivisions, exponential instead of linear. One problem with brentq is that I would have to give one huge bound, but the most likely case is much smaller. (example: finding the sample size from a power identity; we should get an answer below 1000, but in some cases it could be 100,000) One related problem for the bounded case in (0,1) is the open interval, I use 1e-8, some packages use 1e-6 to stay away from the undefined boundary points. But again, there could be some extreme cases where the solution is closer to 0 or 1 than the 1e-8. (I don't remember if we found a solution to a similar problem in the stats.distributions, or if I mix up a case there.)
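[Editor's note: the open-interval problem described above can be illustrated directly: clip the bracket to [eps, 1 - eps] so the root finder never evaluates the function at the undefined endpoints. The helper name below is made up; eps = 1e-8 is the value quoted in the thread, and, as the caveat above says, a root closer than eps to the boundary will not be bracketed.]

```python
# Root finding on the open interval (0, 1): stay eps away from the
# endpoints, where the function may be undefined (logit blows up there).
from scipy.optimize import brentq
from scipy.special import logit

def brentq_open_unit_interval(func, eps=1e-8):
    # caveat: any root within eps of 0 or 1 lies outside this bracket
    return brentq(func, eps, 1.0 - eps)

# solve logit(p) = 5, i.e. p = 1 / (1 + exp(-5))
p = brentq_open_unit_interval(lambda p: logit(p) - 5.0)
```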
Josef > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Mar 18 15:28:32 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 15:28:32 -0400 Subject: [SciPy-User] scipy.stats beginner: help with fitting distribution In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 2:46 PM, Arvind Thiagarajan wrote: > Hi all, > > I'm a beginner with SciPy so this may be a basic question. I am trying > to fit a Rice distribution to some data using scipy.stats. > > However, I first tried some test code which doesn't seem to give me a > very good fit. I tried the following code: > >>>> b = [0.3,] >>>> samples = rice.rvs(b, loc=0, scale=1, size=1000) >>>> rice.fit(samples) > (0.0012012190480231357, -0.0023216862043629813, 1.024758538166374) > > Though the loc and scale seem ok (close to 0 and 1 respectively), I > was hoping the first return value would be closer to 0.3 since the > shape parameter I supplied while generating random samples was 0.3 (in > the call to rvs). > > I initially thought 1000 samples was perhaps too few, but get > similarly poor results with 10,000 samples as well. Concluded that I > must be doing something wrong, or that I'm misinterpreting the usage > for the rvs() or fit() functions. Is the fit function supposed to try > to approximately find the same shape parameter I passed in to rvs? Or > should I be interpreting the fit result differently? > > Wondering if anyone has any ideas? general answer: http://stackoverflow.com/questions/15468215/python-scipy-stats-pareto-fit-how-does-it-work For some distributions it helps to try out different starting values. For some distributions, the maximum likelihood estimation does not work (like pareto, if we also want to estimate loc), either because of curvature problems or because the likelihood can go to infinity.
I don't know anything about the rice distributions, and don't know how difficult it is to estimate the parameters. Josef > > Thanks a lot, > Arvind > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From t.arvind at gmail.com Mon Mar 18 15:57:13 2013 From: t.arvind at gmail.com (Arvind Thiagarajan) Date: Mon, 18 Mar 2013 12:57:13 -0700 Subject: [SciPy-User] scipy.stats beginner: help with fitting distribution In-Reply-To: References: Message-ID: Thanks Josef. I played around a bit more and in the Rice case, it appears the estimation is accurate only if loc and scale are supplied --- the fit is unable to estimate all three together accurately. Also, the estimation seems only accurate for larger values of the shape parameter. Just mentioning this for anyone else who may see similar problems. Thanks, Arvind On Mon, Mar 18, 2013 at 12:28 PM, wrote: > On Mon, Mar 18, 2013 at 2:46 PM, Arvind Thiagarajan wrote: >> Hi all, >> >> I'm a beginner with SciPy so this may be a basic question. I am trying >> to fit a Rice distribution to some data using scipy.stats. >> >> However, I first tried some test code which doesn't seem to give me a >> very good fit. I tried the following code: >> >>>>> b = [0.3,] >>>>> samples = rice.rvs(b, loc=0, scale=1, size=1000) >>>>> rice.fit(samples) >> (0.0012012190480231357, -0.0023216862043629813, 1.024758538166374) >> >> Though the loc and scale seem ok (close to 0 and 1 respectively), I >> was hoping the first return value would be closer to 0.3 since the >> shape parameter I supplied while generating random samples was 0.3 (in >> the call to rvs). >> >> I initially thought 1000 samples was perhaps too few, but get >> similarly poor results with 10,000 samples as well. Concluded that I >> must be doing something wrong, or that I'm misinterpreting the usage >> for the rvs() or fit() functions. 
Is the fit function supposed to try >> to approximately find the same shape parameter I passed in to rvs? Or >> should I be interpreting the fit result differently? >> >> Wondering if anyone has any ideas? > > general answer: > http://stackoverflow.com/questions/15468215/python-scipy-stats-pareto-fit-how-does-it-work > > For some distributions it helps to try out different starting values. > For some distributions, the maximum likelihood estimation does not > work (like pareto if we want to estimate also loc), either curvature > problems or likelihood can go to infinite. > > I don't know anything about the rice distributions, and don't know how > difficult it is to estimate the parameters. > > Josef > >> >> Thanks a lot, >> Arvind >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Mon Mar 18 16:37:28 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 16:37:28 -0400 Subject: [SciPy-User] scipy.stats beginner: help with fitting distribution In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 3:57 PM, Arvind Thiagarajan wrote: > Thanks Josef. I played around a bit more and in the Rice case, it > appears the estimation is accurate only if loc and scale are supplied > --- the fit is unable to estimate all three together accurately. Also, > the estimation seems only accurate for larger values of the shape > parameter. Just mentioning this for anyone else who may see similar > problems. quick look: http://en.wikipedia.org/wiki/Rice_distribution#Parameter_estimation_.28the_Koay_inversion_technique.29 maybe it should work if loc is fixed. 
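[Editor's note: the fixed-loc suggestion above can be tried with the generic fit machinery: rv_continuous.fit accepts floc (and fscale) keywords to hold parameters fixed during the maximum likelihood optimization. A quick sketch, mirroring the b = 0.3 setup from earlier in the thread:]

```python
# Fit a Rice distribution with loc held fixed at 0, so only the shape b
# and the scale are estimated by maximum likelihood.
import numpy as np
from scipy.stats import rice

np.random.seed(12345)
samples = rice.rvs(0.3, loc=0, scale=1, size=10000)

# floc=0 pins loc at its known value; loc_hat comes back as exactly 0
b_hat, loc_hat, scale_hat = rice.fit(samples, floc=0)
```

As Arvind observes below, even with loc fixed the estimate of a small shape parameter such as 0.3 can remain poor; fixing fscale=1 as well constrains the problem further.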
Josef > > Thanks, > Arvind > > On Mon, Mar 18, 2013 at 12:28 PM, wrote: >> On Mon, Mar 18, 2013 at 2:46 PM, Arvind Thiagarajan wrote: >>> Hi all, >>> >>> I'm a beginner with SciPy so this may be a basic question. I am trying >>> to fit a Rice distribution to some data using scipy.stats. >>> >>> However, I first tried some test code which doesn't seem to give me a >>> very good fit. I tried the following code: >>> >>>>>> b = [0.3,] >>>>>> samples = rice.rvs(b, loc=0, scale=1, size=1000) >>>>>> rice.fit(samples) >>> (0.0012012190480231357, -0.0023216862043629813, 1.024758538166374) >>> >>> Though the loc and scale seem ok (close to 0 and 1 respectively), I >>> was hoping the first return value would be closer to 0.3 since the >>> shape parameter I supplied while generating random samples was 0.3 (in >>> the call to rvs). >>> >>> I initially thought 1000 samples was perhaps too few, but get >>> similarly poor results with 10,000 samples as well. Concluded that I >>> must be doing something wrong, or that I'm misinterpreting the usage >>> for the rvs() or fit() functions. Is the fit function supposed to try >>> to approximately find the same shape parameter I passed in to rvs? Or >>> should I be interpreting the fit result differently? >>> >>> Wondering if anyone has any ideas? >> >> general answer: >> http://stackoverflow.com/questions/15468215/python-scipy-stats-pareto-fit-how-does-it-work >> >> For some distributions it helps to try out different starting values. >> For some distributions, the maximum likelihood estimation does not >> work (like pareto if we want to estimate also loc), either curvature >> problems or likelihood can go to infinite. >> >> I don't know anything about the rice distributions, and don't know how >> difficult it is to estimate the parameters. 
>> >> Josef >> >>> >>> Thanks a lot, >>> Arvind >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Mon Mar 18 22:02:04 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 22:02:04 -0400 Subject: [SciPy-User] expanding optimize.brentq In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 3:23 PM, wrote: > On Mon, Mar 18, 2013 at 2:58 PM, Charles R Harris > wrote: >> >> >> On Mon, Mar 18, 2013 at 11:13 AM, wrote: >>> >>> Just a thought, given the problems with fsolve >>> >>> In scipy.stats.distribution, we use an expanding brentq, which looks >>> like it works very well. >>> https://github.com/scipy/scipy/pull/216/files#L0R1175 >>> >>> A question given that I have not much experience with root finders: >>> >>> Is there a general algorithm that has similar properties? >>> >>> >>> In these application we know that the function is strictly monotonic, >>> but we don't have a bound for brentq without checking. For some cases >>> I know one of the bounds (e.g. 0 for non-negative solutions). >>> >>> (I could use that right now as a replacement for fsolve.) >>> >> >> I assume you are talking about the 1-D case where you don't have bounding >> intervals to start with. There are various search methods that can be used >> to find such intervals, but they can't be guaranteed to work. 
They all come >> down to making trials of various points looking for sign reversals, and the >> searches are either subdividing a larger interval or creating larger and >> larger intervals by, say, doubling each time around. The addition of such >> methods would be useful, but they need a failure mode :( > > Yes that's pretty much what I would like, and use in the rewritten > scipy.stats.distribution function. > But I was hoping there might be something more systematic than "home-made". a "home-made" proof of concept it works, but is still pretty ugly, and can only expand away from zero (since this was the case that we needed in scipy.stats.distribution) I just kept piling on conditions until all my test cases finished without exception. Josef ------------------ # -*- coding: utf-8 -*- """ Created on Mon Mar 18 15:48:23 2013 Author: Josef Perktold """ import numpy as np from scipy import optimize # based on scipy.stats.distributions._ppf_single_call def brentq_expanding(func, low=None, upp=None, args=(), xtol=1e-5, start_low=None, start_upp=None, increasing=None): #assumes monotonic ``func`` left, right = low, upp #alias # start_upp first because of possible sl = -1 > upp if upp is not None: su = upp elif start_upp is not None: su = start_upp if start_upp < 0: print "raise ValueError('start_upp needs to be positive')" else: su = 1 start_upp = 1 if low is not None: sl = low elif start_low is not None: sl = start_low if start_low > 0: print "raise ValueError('start_low needs to be negative')" else: sl = min(-1, su - 1) start_low = sl # need sl < su if upp is None: su = max(su, sl + 1) # increasing or not ? 
if ((low is None) or (upp is None)) and increasing is None: assert sl < su f_low = func(sl, *args) f_upp = func(su, *args) increasing = (f_low < f_upp) print 'low, upp', low, upp print 'increasing', increasing print 'sl, su', sl, su start_low, start_upp = sl, su if not increasing: start_low, start_upp = start_upp, start_low left, right = right, left max_it = 10 n_it = 0 factor = 10. if left is None: left = start_low #* factor while func(left, *args) > 0: right = left left *= factor if n_it >= max_it: break n_it += 1 # left is now such that cdf(left) < q if right is None: right = start_upp #* factor while func(right, *args) < 0: left = right right *= factor if n_it >= max_it: break n_it += 1 # right is now such that cdf(right) > q # if left > right: # left, right = right, left #swap return optimize.brentq(func, \ left, right, args=args, xtol=xtol) def func(x, a): f = (x - a)**3 print 'evaluating at %f, fval = %f' % (x, f) return f def funcn(x, a): f = -(x - a)**3 print 'evaluating at %f, fval = %f' % (x, f) return f print brentq_expanding(func, args=(0,), increasing=True) print brentq_expanding(funcn, args=(0,), increasing=False) print brentq_expanding(funcn, args=(-50,), increasing=False) print brentq_expanding(func, args=(20,)) print brentq_expanding(funcn, args=(20,)) print brentq_expanding(func, args=(500000,)) # one bound print brentq_expanding(func, args=(500000,), low=10000) print brentq_expanding(func, args=(-50000,), upp=-1000) print brentq_expanding(funcn, args=(500000,), low=10000) print brentq_expanding(funcn, args=(-50000,), upp=-1000) # both bounds # hits maxiter in brentq if bounds too wide print brentq_expanding(func, args=(500000,), low=300000, upp=700000) print brentq_expanding(func, args=(-50000,), low= -70000, upp=-1000) print brentq_expanding(funcn, args=(500000,), low=300000, upp=700000) print brentq_expanding(funcn, args=(-50000,), low= -70000, upp=-10000) ----------------------------- > >> >> Your application looks to be finding points on a 
monotone function on the >> interval [0, inf]. For that you could use the interval [0, 1] and map it to >> [0, inf] with the substitution x/1-x, although the point x == 1 will be a >> bit tricky unless your function handles inf gracefully. The accuracy will >> also tend to be relative rather than absolute, but that seems to be a given >> in any case. I suppose this approach falls under the change of variables >> method. Those tend to be problem specific, so I don't know if they would be >> a good fit for scipy, but certainly they could work well in specialized >> domains. It would also seem to be a good idea to match the asymptotic >> behavior if possible, which will tend to linearize the problem. So other >> options would be functions like log, erf, arctan, etc, for the substitution, >> but in those cases you probably already have the the inverse functions. > > That might be a good idea to get brentq to influence the selection of > subdivisions, exponential instead of linear. One problem with brentq > is that I would have to give one huge bound, but the most likely case > is much smaller. > (example find sample size for power identity, we should get an answer > below 1000 but in some cases it could be 100.000) > > One related problem for the bounded case in (0,1) is the open > interval, I use 1e-8, some packages use 1e-6 to stay away from the > undefined boundary points. > But again, there could be some extreme cases where the solution is > closer to 0 or 1, than the 1e-8. > (I don't remember if we found a solution to a similar problem in the > stats.distributions, or if I mix up a case there.) 
> > Josef > >> >> Chuck >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> From josef.pktd at gmail.com Mon Mar 18 22:32:22 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Mar 2013 22:32:22 -0400 Subject: [SciPy-User] expanding optimize.brentq In-Reply-To: References: Message-ID: On Mon, Mar 18, 2013 at 10:02 PM, wrote: > On Mon, Mar 18, 2013 at 3:23 PM, wrote: >> On Mon, Mar 18, 2013 at 2:58 PM, Charles R Harris >> wrote: >>> >>> >>> On Mon, Mar 18, 2013 at 11:13 AM, wrote: >>>> >>>> Just a thought, given the problems with fsolve >>>> >>>> In scipy.stats.distribution, we use an expanding brentq, which looks >>>> like it works very well. >>>> https://github.com/scipy/scipy/pull/216/files#L0R1175 >>>> >>>> A question given that I have not much experience with root finders: >>>> >>>> Is there a general algorithm that has similar properties? >>>> >>>> >>>> In these application we know that the function is strictly monotonic, >>>> but we don't have a bound for brentq without checking. For some cases >>>> I know one of the bounds (e.g. 0 for non-negative solutions). >>>> >>>> (I could use that right now as a replacement for fsolve.) >>>> >>> >>> I assume you are talking about the 1-D case where you don't have bounding >>> intervals to start with. There are various search methods that can be used >>> to find such intervals, but they can't be guaranteed to work. They all come >>> down to making trials of various points looking for sign reversals, and the >>> searches are either subdividing a larger interval or creating larger and >>> larger intervals by, say, doubling each time around. The addition of such >>> methods would be useful, but they need a failure mode :( >> >> Yes that's pretty much what I would like, and use in the rewritten >> scipy.stats.distribution function. 
>> But I was hoping there might be something more systematic than "home-made". > > a "home-made" proof of concept > > it works, but is still pretty ugly, and can only expand away from zero > (since this was the case that > we needed in scipy.stats.distribution) > > I just kept piling on conditions until all my test cases finished > without exception. "silly" usecase: find the zero of a cubic function (x-a)**3 if the root a is 1.234e30: we need to specify that the function is increasing (and increase both maxiter) print brentq_expanding(func, args=(1.234e30,), increasing=True, maxiter_bq=200) first and last steps of the root-finding: evaluating at -1.000000, fval = -1.87908e+90 evaluating at 1.000000, fval = -1.87908e+90 evaluating at 10.000000, fval = -1.87908e+90 evaluating at 100.000000, fval = -1.87908e+90 evaluating at 1000.000000, fval = -1.87908e+90 evaluating at 10000.000000, fval = -1.87908e+90 evaluating at 100000.000000, fval = -1.87908e+90 evaluating at 1000000.000000, fval = -1.87908e+90 evaluating at 10000000.000000, fval = -1.87908e+90 evaluating at 100000000.000000, fval = -1.87908e+90 ..... 
evaluating at 1233999999999999400000000000000.000000, fval = -178405961588244990000000000000000000000000000.000000 evaluating at 1233999999999999700000000000000.000000, fval = -22300745198530623000000000000000000000000000.000000 evaluating at 1234000000000000800000000000000.000000, fval = 602120120360326820000000000000000000000000000.000000 evaluating at 1234000000000000000000000000000.000000, fval = 0.000000 1.234e+30 Josef fun > > Josef > > ------------------ > # -*- coding: utf-8 -*- > """ > > Created on Mon Mar 18 15:48:23 2013 > Author: Josef Perktold > > """ > > import numpy as np > from scipy import optimize > > # based on scipy.stats.distributions._ppf_single_call > def brentq_expanding(func, low=None, upp=None, args=(), xtol=1e-5, > start_low=None, start_upp=None, increasing=None): > #assumes monotonic ``func`` > > left, right = low, upp #alias > > # start_upp first because of possible sl = -1 > upp > if upp is not None: > su = upp > elif start_upp is not None: > su = start_upp > if start_upp < 0: > print "raise ValueError('start_upp needs to be positive')" > else: > su = 1 > start_upp = 1 > > > if low is not None: > sl = low > elif start_low is not None: > sl = start_low > if start_low > 0: > print "raise ValueError('start_low needs to be negative')" > else: > sl = min(-1, su - 1) > start_low = sl > > # need sl < su > if upp is None: > su = max(su, sl + 1) > > > # increasing or not ? > if ((low is None) or (upp is None)) and increasing is None: > assert sl < su > f_low = func(sl, *args) > f_upp = func(su, *args) > increasing = (f_low < f_upp) > > print 'low, upp', low, upp > print 'increasing', increasing > print 'sl, su', sl, su > > > start_low, start_upp = sl, su > if not increasing: > start_low, start_upp = start_upp, start_low > left, right = right, left > > max_it = 10 > n_it = 0 > factor = 10. 
> if left is None: > left = start_low #* factor > while func(left, *args) > 0: > right = left > left *= factor > if n_it >= max_it: > break > n_it += 1 > # left is now such that cdf(left) < q > if right is None: > right = start_upp #* factor > while func(right, *args) < 0: > left = right > right *= factor > if n_it >= max_it: > break > n_it += 1 > # right is now such that cdf(right) > q > > # if left > right: > # left, right = right, left #swap > return optimize.brentq(func, \ > left, right, args=args, xtol=xtol) > > > def func(x, a): > f = (x - a)**3 > print 'evaluating at %f, fval = %f' % (x, f) > return f > > > > def funcn(x, a): > f = -(x - a)**3 > print 'evaluating at %f, fval = %f' % (x, f) > return f > > print brentq_expanding(func, args=(0,), increasing=True) > > print brentq_expanding(funcn, args=(0,), increasing=False) > print brentq_expanding(funcn, args=(-50,), increasing=False) > > print brentq_expanding(func, args=(20,)) > print brentq_expanding(funcn, args=(20,)) > print brentq_expanding(func, args=(500000,)) > > # one bound > print brentq_expanding(func, args=(500000,), low=10000) > print brentq_expanding(func, args=(-50000,), upp=-1000) > > print brentq_expanding(funcn, args=(500000,), low=10000) > print brentq_expanding(funcn, args=(-50000,), upp=-1000) > > # both bounds > # hits maxiter in brentq if bounds too wide > print brentq_expanding(func, args=(500000,), low=300000, upp=700000) > print brentq_expanding(func, args=(-50000,), low= -70000, upp=-1000) > print brentq_expanding(funcn, args=(500000,), low=300000, upp=700000) > print brentq_expanding(funcn, args=(-50000,), low= -70000, upp=-10000) > ----------------------------- > >> >>> >>> Your application looks to be finding points on a monotone function on the >>> interval [0, inf]. For that you could use the interval [0, 1] and map it to >>> [0, inf] with the substitution x/(1-x), although the point x == 1 will be a >>> bit tricky unless your function handles inf gracefully.
The accuracy will >>> also tend to be relative rather than absolute, but that seems to be a given >>> in any case. I suppose this approach falls under the change of variables >>> method. Those tend to be problem specific, so I don't know if they would be >>> a good fit for scipy, but certainly they could work well in specialized >>> domains. It would also seem to be a good idea to match the asymptotic >>> behavior if possible, which will tend to linearize the problem. So other >>> options would be functions like log, erf, arctan, etc, for the substitution, >>> but in those cases you probably already have the inverse functions. >> >> That might be a good idea, to get brentq to influence the selection of >> subdivisions, exponential instead of linear. One problem with brentq >> is that I would have to give one huge bound, but the most likely case >> is much smaller. >> (example: find the sample size for a power identity; we should get an answer >> below 1000, but in some cases it could be 100,000) >> >> One related problem for the bounded case in (0,1) is the open >> interval: I use 1e-8, some packages use 1e-6 to stay away from the >> undefined boundary points. >> But again, there could be some extreme cases where the solution is >> closer to 0 or 1 than the 1e-8. >> (I don't remember if we found a solution to a similar problem in >> stats.distributions, or if I mix up a case there.)
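The substitution discussed above is easy to prototype. The sketch below maps t in [0, 1) to x = t/(1 - t) and brackets on the bounded interval instead; `brentq_half_line` is a hypothetical helper (not a scipy function), and it assumes a monotone function with a single sign change on [0, inf):

```python
from scipy import optimize

def brentq_half_line(func, args=(), xtol=1e-12):
    # Map t in [0, 1) to x = t / (1 - t), so a root search on the half
    # line [0, inf) becomes a bounded search on [0, 1 - eps].
    def g(t):
        return func(t / (1.0 - t), *args)
    eps = 1e-12  # stay away from t == 1, where x -> inf
    t_root = optimize.brentq(g, 0.0, 1.0 - eps, xtol=xtol)
    return t_root / (1.0 - t_root)

# root of the cubic (x - a)**3 with a = 1234.0 (made-up test value)
root = brentq_half_line(lambda x: (x - 1234.0)**3)
print(root)
```

Note the accuracy caveat from the quoted text: xtol applies to t, so the error in x grows like 1/(1 - t)**2 for large roots, i.e. the tolerance is effectively relative rather than absolute.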
>> >> Josef >> >>> >>> Chuck >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> From lanceboyle at qwest.net Tue Mar 19 03:02:20 2013 From: lanceboyle at qwest.net (Jerry) Date: Tue, 19 Mar 2013 00:02:20 -0700 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> Message-ID: <0242EE41-9844-4EAB-9F3D-1FEEA4DBF124@qwest.net> On Mar 18, 2013, at 3:56 AM, Almar Klein wrote: > > Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a hello world program. > > I just found: "Mac OS X 10.7 "Lion" was the first version of OS X to drop support for 32-bit Intel processors and run exclusively on 64-bit Intel CPUs." > So, I'm sorry, but you'll have to wait for the next release. > > The reason that the binaries are 32 bit is that I forgot to make the VM 64-bit when I created it. Whoops. I only found out much later (after all the hard work of setting up the machine was done). So the blame's on me. > > - Almar Thanks for looking into this. Jerry > > > > >> Dear all, >> >> We're pleased to announce release 2013b of Pyzo distro, a Python distribution for scientific computing based on Python 3. For more information, and to give it a try, visit http://www.pyzo.org. >> >> The most notable changes since the last release: >> Pyzo is also built for Mac. >> Python 3.3 is now used. >> Added an executable to launch IPython notebook. >> New versions of several packages, including the IDE. >> More information: >> >> Pyzo is available for Windows, Mac and Linux. The distribution is portable, thus providing a way to install a scientific Python stack on computers without the need for admin rights. >> >> Naturally, Pyzo distro is compliant with the scipy-stack, and comes with additional packages such as scikit-image and scikit-learn.
>> >> With Pyzo we want to make scientific computing in Python more easily accessible. We especially hope to make it easier for newcomers (such as Matlab converts) to join our awesome community. Pyzo uses IEP as the default front-end IDE, and IPython (with notebook) is also available. >> >> Happy coding, >> Almar >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -- > Almar Klein, PhD > Science Applied > phone: +31 6 19268652 > e-mail: a.klein at science-applied.nl > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From seb.haase at gmail.com Tue Mar 19 03:57:05 2013 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 19 Mar 2013 08:57:05 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: <0242EE41-9844-4EAB-9F3D-1FEEA4DBF124@qwest.net> References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> <0242EE41-9844-4EAB-9F3D-1FEEA4DBF124@qwest.net> Message-ID: I don't think this is correct: shaase at sebs-macbookpro:~: python-32 Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.maxint 2147483647 Are you maybe talking about the default Python installation? Cheers, Sebastian Haase On Tue, Mar 19, 2013 at 8:02 AM, Jerry wrote: > > On Mar 18, 2013, at 3:56 AM, Almar Klein wrote: > > >> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a >> hello world program.
> > > I just found: "Mac OS X 10.7 "Lion" was the first version of OS X to drop > support for 32-bit Intel processors and run exclusively on 64-bit Intel > CPUs." > So, I'm sorry, but you'll have to wait for the next release. > > The reason that the binaries are 32 bit is that I forgot to make the VM > 64bit when I created it. Woops. I only found out much later (after all the > hard work of setting up the machine was done). So the blame's on me. > > - Almar > > > Thanks for looking into this. > Jerry > > > > >> >> >> Dear all, >> >> We're pleased to announce release 2013b of Pyzo distro, a Python >> distribution for scientific computing based on Python 3. For more >> information, and to give it a try, visit http://www.pyzo.org. >> >> The most notable changes since the last release: >> >> Pyzo is also build for Mac. >> Python 3.3 is now used. >> Added an executable to launch IPython notebook. >> New versions of several packages, including the IDE. >> >> More information: >> >> Pyzo is available for Windows, Mac and Linux. The distribution is >> portable, thus providing a way to install a scientific Python stack on >> computers without the need for admin rights. >> >> Naturally, Pyzo distro is complient with the scipy-stack, and comes with >> additional packages such as scikit-image and scikit-learn. >> >> With Pyzo we want to make scientific computing in Python easier >> accessible. We especially hope to make it easier for newcomers (such as >> Matlab converts) to join our awesome community. Pyzo uses IEP as the >> default front-end IDE, and IPython (with notebook) is also available. 
>> >> Happy coding, >> Almar >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Almar Klein, PhD > Science Applied > phone: +31 6 19268652 > e-mail: a.klein at science-applied.nl > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From seb.haase at gmail.com Tue Mar 19 03:59:38 2013 From: seb.haase at gmail.com (Sebastian Haase) Date: Tue, 19 Mar 2013 08:59:38 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> <0242EE41-9844-4EAB-9F3D-1FEEA4DBF124@qwest.net> Message-ID: shaase at sebs-macbookpro:~: file /usr/bin/python /usr/bin/python: Mach-O universal binary with 2 architectures /usr/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64 /usr/bin/python (for architecture i386): Mach-O executable i386 shaase at sebs-macbookpro:~: uname -a Darwin xxxxxxxx.fu-berlin.de 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 OS-X: 10.7.5 -S. On Tue, Mar 19, 2013 at 8:57 AM, Sebastian Haase wrote: > I don't think this is correct: > shaase at sebs-macbookpro:~: python-32 > Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) > [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import sys >>>> sys.maxint > 2147483647 > > > Are you talking maybe about the default python installation .... ? 
> > Cheers, > Sebastian Haase > > > On Tue, Mar 19, 2013 at 8:02 AM, Jerry wrote: >> >> On Mar 18, 2013, at 3:56 AM, Almar Klein wrote: >> >> >>> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run a >>> hello world program. >> >> >> I just found: "Mac OS X 10.7 "Lion" was the first version of OS X to drop >> support for 32-bit Intel processors and run exclusively on 64-bit Intel >> CPUs." >> So, I'm sorry, but you'll have to wait for the next release. >> >> The reason that the binaries are 32 bit is that I forgot to make the VM >> 64bit when I created it. Woops. I only found out much later (after all the >> hard work of setting up the machine was done). So the blame's on me. >> >> - Almar >> >> >> Thanks for looking into this. >> Jerry >> >> >> >> >>> >>> >>> Dear all, >>> >>> We're pleased to announce release 2013b of Pyzo distro, a Python >>> distribution for scientific computing based on Python 3. For more >>> information, and to give it a try, visit http://www.pyzo.org. >>> >>> The most notable changes since the last release: >>> >>> Pyzo is also build for Mac. >>> Python 3.3 is now used. >>> Added an executable to launch IPython notebook. >>> New versions of several packages, including the IDE. >>> >>> More information: >>> >>> Pyzo is available for Windows, Mac and Linux. The distribution is >>> portable, thus providing a way to install a scientific Python stack on >>> computers without the need for admin rights. >>> >>> Naturally, Pyzo distro is complient with the scipy-stack, and comes with >>> additional packages such as scikit-image and scikit-learn. >>> >>> With Pyzo we want to make scientific computing in Python easier >>> accessible. We especially hope to make it easier for newcomers (such as >>> Matlab converts) to join our awesome community. Pyzo uses IEP as the >>> default front-end IDE, and IPython (with notebook) is also available. 
>>> >>> Happy coding, >>> Almar >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> >> -- >> Almar Klein, PhD >> Science Applied >> phone: +31 6 19268652 >> e-mail: a.klein at science-applied.nl >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> From a.klein at science-applied.nl Tue Mar 19 05:48:43 2013 From: a.klein at science-applied.nl (Almar Klein) Date: Tue, 19 Mar 2013 10:48:43 +0100 Subject: [SciPy-User] ANN: Pyzo distro 2013b In-Reply-To: References: <152D9C59-D775-4217-9A98-79F3CD6E71AE@qwest.net> <0242EE41-9844-4EAB-9F3D-1FEEA4DBF124@qwest.net> Message-ID: Mmm, you're right. It indeed says it dropped support for 32-bit Intel CPUs; not necessarily 32-bit libraries. - Almar On 19 March 2013 08:59, Sebastian Haase wrote: > shaase at sebs-macbookpro:~: file /usr/bin/python > /usr/bin/python: Mach-O universal binary with 2 architectures > /usr/bin/python (for architecture x86_64): Mach-O 64-bit executable > x86_64 > /usr/bin/python (for architecture i386): Mach-O executable i386 > > shaase at sebs-macbookpro:~: uname -a > Darwin xxxxxxxx.fu-berlin.de 11.4.2 Darwin Kernel Version 11.4.2: Thu > Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 > > OS-X: 10.7.5 > > > -S.
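A side note on the 32-bit check quoted here: sys.maxint only exists on Python 2. A version-independent way to ask whether an interpreter runs in 32- or 64-bit mode is to look at the pointer size:

```python
import struct
import sys

# pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one
bits = struct.calcsize("P") * 8
print("running a %d-bit Python" % bits)

# sys.maxsize exists on both Python 2 and 3 and carries the same
# information (sys.maxint, used earlier in the thread, is Python 2 only)
assert bits == (64 if sys.maxsize > 2**32 else 32)
```

This checks the interpreter actually running, which can differ from what `file` reports for a universal (fat) binary.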
> > On Tue, Mar 19, 2013 at 8:57 AM, Sebastian Haase > wrote: > > I don't think this is correct: > > shaase at sebs-macbookpro:~: python-32 > > Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) > > [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin > > Type "help", "copyright", "credits" or "license" for more information. > >>>> import sys > >>>> sys.maxint > > 2147483647 > > > > > > Are you talking maybe about the default python installation .... ? > > > > Cheers, > > Sebastian Haase > > > > > > On Tue, Mar 19, 2013 at 8:02 AM, Jerry wrote: > >> > >> On Mar 18, 2013, at 3:56 AM, Almar Klein wrote: > >> > >> > >>> Pyzo crashed on OS X 10.7.5 in under three minutes while trying to run > a > >>> hello world program. > >> > >> > >> I just found: "Mac OS X 10.7 "Lion" was the first version of OS X to > drop > >> support for 32-bit Intel processors and run exclusively on 64-bit Intel > >> CPUs." > >> So, I'm sorry, but you'll have to wait for the next release. > >> > >> The reason that the binaries are 32 bit is that I forgot to make the VM > >> 64bit when I created it. Woops. I only found out much later (after all > the > >> hard work of setting up the machine was done). So the blame's on me. > >> > >> - Almar > >> > >> > >> Thanks for looking into this. > >> Jerry > >> > >> > >> > >> > >>> > >>> > >>> Dear all, > >>> > >>> We're pleased to announce release 2013b of Pyzo distro, a Python > >>> distribution for scientific computing based on Python 3. For more > >>> information, and to give it a try, visit http://www.pyzo.org. > >>> > >>> The most notable changes since the last release: > >>> > >>> Pyzo is also build for Mac. > >>> Python 3.3 is now used. > >>> Added an executable to launch IPython notebook. > >>> New versions of several packages, including the IDE. > >>> > >>> More information: > >>> > >>> Pyzo is available for Windows, Mac and Linux. 
The distribution is > >>> portable, thus providing a way to install a scientific Python stack on > >>> computers without the need for admin rights. > >>> > >>> Naturally, Pyzo distro is complient with the scipy-stack, and comes > with > >>> additional packages such as scikit-image and scikit-learn. > >>> > >>> With Pyzo we want to make scientific computing in Python easier > >>> accessible. We especially hope to make it easier for newcomers (such as > >>> Matlab converts) to join our awesome community. Pyzo uses IEP as the > >>> default front-end IDE, and IPython (with notebook) is also available. > >>> > >>> Happy coding, > >>> Almar > >>> > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >>> > >>> > >>> _______________________________________________ > >>> SciPy-User mailing list > >>> SciPy-User at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-user > >>> > >> > >> > >> > >> -- > >> Almar Klein, PhD > >> Science Applied > >> phone: +31 6 19268652 > >> e-mail: a.klein at science-applied.nl > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > >> > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Almar Klein, PhD Science Applied phone: +31 6 19268652 e-mail: a.klein at science-applied.nl -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pmhobson at gmail.com Tue Mar 19 15:47:02 2013 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 19 Mar 2013 12:47:02 -0700 Subject: [SciPy-User] Problems with optimize.leastsq and curve_fit Message-ID: I guess I don't quite understand how to use the functions properly since they're not returning meaningful results on what should be a pretty trivial linear fit. My initial thought is that they're being tripped up by my `x` variable not being uniformly spaced. Consider the following (overly) simplified example: ## start import numpy as np import scipy.optimize as opt x = np.array([ 0. , 0.5, 1. , 2. , 3. , 3.5, 5. , 5.5, 6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. , 11. , 11.5, 12.5], dtype=np.float32) y = np.array([ 2.7, 8.6, 9. , 13.8, 16.8, 17.8, 23.24, 25.2, 26.3, 27.1, 27.68, 30.7, 32.9, 32.9, 37.07, 39.5, 40.5, 45.7, 46.76, 50.2], dtype=np.float32) def lsq_line(params, x, y): return y - (params[0]*x + params[1]) def cf_line(x, m, b): return m*x + b np.polyfit(x, y, 1) # works great # Out[6]: array([ 3.54965901, 5.09342384], dtype=float32) opt.leastsq(lsq_line, [3,5], args=(x,y)) # bad # Out[9]: (array([ 3., 5.]), 4) opt.curve_fit(cf_line, x, y, p0=[3,5]) # bad # Out[10]: (array([ 3., 5.]), inf) ## stop Any thoughts or advice on how to better use these functions would be much appreciated. Thanks, -p -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Mar 19 15:56:36 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 19 Mar 2013 21:56:36 +0200 Subject: [SciPy-User] Problems with optimize.leastsq and curve_fit In-Reply-To: References: Message-ID: 19.03.2013 21:47, Paul Hobson kirjoitti: > I guess I don't quite understand how to use the functions properly since > they're not returning meaningful results on what should be a pretty > trivial linear fit. My initial thought is that they're being tripped up > by my `x` variable not being uniformly spaced. 
The problem is the use of float32 rather than float64. This is a bug, which was fixed in 0.12.0. -- Pauli Virtanen From pmhobson at gmail.com Tue Mar 19 17:39:01 2013 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 19 Mar 2013 14:39:01 -0700 Subject: [SciPy-User] Problems with optimize.leastsq and curve_fit In-Reply-To: References: Message-ID: On Tue, Mar 19, 2013 at 12:56 PM, Pauli Virtanen wrote: > 19.03.2013 21:47, Paul Hobson wrote: > > I guess I don't quite understand how to use the functions properly since > > they're not returning meaningful results on what should be a pretty > > trivial linear fit. My initial thought is that they're being tripped up > > by my `x` variable not being uniformly spaced. > > The problem is the use of float32 rather than float64. This is a bug, > which was fixed in 0.12.0. > > Thanks for the info and the prompt reply. This is the info I need. -paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From indiajoe at gmail.com Thu Mar 21 16:10:30 2013 From: indiajoe at gmail.com (Joe Philip Ninan) Date: Fri, 22 Mar 2013 01:40:30 +0530 Subject: [SciPy-User] optimize.leastsq error estimation Message-ID: Hi, I want to confirm whether I understood the documentation of optimize.leastsq correctly. ( http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html) For estimating the error in the fitted parameters, the documentation says: "This matrix (cov_x) must be multiplied by the residual variance to get the covariance of the parameter estimates" i.e. multiply the output cov_x with the "reduced chi square". Right? And take the sqrt of the corresponding diagonal element from the covariance matrix. Here the "reduced chi square" is the sum of {[(fitted function - data)/sigma]**2} /N, where N is the degrees of freedom [= len(data) - number of parameters] and sigma is the error in each data point. I mainly want to confirm whether the divide by sigma is required or not.
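Pauli's diagnosis earlier in this thread suggests a simple workaround for scipy releases before 0.12: cast the inputs to float64 before fitting. A minimal sketch with made-up straight-line data (slope 3.5, intercept 5.1):

```python
import numpy as np
from scipy import optimize

# synthetic float32 data, analogous to Paul's arrays
x = np.arange(20, dtype=np.float32)
y = (3.5 * x + 5.1).astype(np.float32)

def cf_line(x, m, b):
    return m * x + b

# workaround for scipy < 0.12: cast the inputs to float64 before fitting
popt, pcov = optimize.curve_fit(cf_line, x.astype(np.float64),
                                y.astype(np.float64), p0=[3, 5])
print(popt)
```

On scipy 0.12 and later the cast is unnecessary, since the float32 handling in leastsq/curve_fit was fixed.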
(I couldn't find the definition of "residual variance" in documentation, so i assumed it is reduced chi square). Thanking you, Joe -- /--------------------------------------------------------------- "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Thu Mar 21 22:38:25 2013 From: newville at cars.uchicago.edu (Matt Newville) Date: Thu, 21 Mar 2013 21:38:25 -0500 Subject: [SciPy-User] optimize.leastsq error estimation In-Reply-To: References: Message-ID: Hi Joe, On Thu, Mar 21, 2013 at 3:10 PM, Joe Philip Ninan wrote: > Hi, > I want to confirm whether i understood the documentation of optimize.leastsq > correctly. > ( > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html > ) > For estimating the error in the fitted parameter. > The documentation says : > "This matrix (cov_x) must be multiplied by the residual variance to get the > covariance of the parameter estimates" > i.e. Multiply the output cov_x with the "reduced chi square". Right? and > take sqrt of corresponding diagonal element from covariance matrix. > Where "residual chi square" is the sum of {[(fitted function - > data)/sigma]**2} /N > and where N is degrees of freedom [= len(data)-number of parameters] > And sigma is the error in each data point. > > I mainly want to confirm whether the divide by sigma is required or not. > (I couldn't find the definition of "residual variance" in documentation, so > i assumed it is reduced chi square). > > Thanking you, > Joe I think you're correct, but there may be some confusion in terminology. The standard errors are the square-root of the diagonal elements of (covar * reduced_chi_square), where reduced_chi_square is (((data-model)/uncertainty)**2).sum() / (len(ydata)- len(variables)) In (untested!) 
code, it might look like: bestvals, covar, info, errmsg, ier = leastsq(objectivefunc, initvals, args=args, full_output=1, **kws) residual = objectivefunc(bestvals, *args) reduced_chi_square = (residual**2).sum() / (len(data) - len(initvals)) covar = covar * reduced_chi_square std_error = array([sqrt(covar[i, i]) for i in range(len(bestvals))]) Here, objectivefunc is expected to return the scaled misfit array, that is "(data - model)/uncertainty". Cheers, --Matt Newville 630-252-0431 From indiajoe at gmail.com Fri Mar 22 04:58:06 2013 From: indiajoe at gmail.com (Joe Philip Ninan) Date: Fri, 22 Mar 2013 14:28:06 +0530 Subject: [SciPy-User] optimize.leastsq error estimation In-Reply-To: References: Message-ID: Hi Matt, Thanks for clarifying and also for providing an example. I think it will be nice if we add the term "reduced chi square" also along with "reduced variance" in the scipy documentation. And also an example to show how to estimate the error in the documentation page. Thanking you, Joe On 22 March 2013 08:08, Matt Newville wrote: > Hi Joe, > > On Thu, Mar 21, 2013 at 3:10 PM, Joe Philip Ninan > wrote: > > Hi, > > I want to confirm whether i understood the documentation of > optimize.leastsq > > correctly. > > ( > > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html > > ) > > For estimating the error in the fitted parameter. > > The documentation says : > > "This matrix (cov_x) must be multiplied by the residual variance to get > the > > covariance of the parameter estimates" > > i.e. Multiply the output cov_x with the "reduced chi square". Right? and > > take sqrt of corresponding diagonal element from covariance matrix. > > Where "residual chi square" is the sum of {[(fitted function - > > data)/sigma]**2} /N > > and where N is degrees of freedom [= len(data)-number of parameters] > > And sigma is the error in each data point. > > > > I mainly want to confirm whether the divide by sigma is required or not. 
> > (I couldn't find the definition of "residual variance" in documentation, > so > > i assumed it is reduced chi square). > > > > Thanking you, > > Joe > > I think you're correct, but there may be some confusion in > terminology. The standard errors are the square-root of the diagonal > elements of (covar * reduced_chi_square), where reduced_chi_square is > > (((data-model)/uncertainty)**2).sum() / (len(ydata)- len(variables)) > > In (untested!) code, it might look like: > > bestvals, covar, info, errmsg, ier = leastsq(objectivefunc, initvals, > > args=args, full_output=1, **kws) > > residual = objectivefunc(bestvals, *args) > reduced_chi_square = (residual**2).sum() / (len(data) - len(initvals)) > > covar = covar * reduced_chi_square > > std_error = array([sqrt(covar[i, i]) for i in range(len(bestvals))]) > > Here, objectivefunc is expected to return the scaled misfit array, > that is "(data - model)/uncertainty". > > Cheers, > > --Matt Newville 630-252-0431 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- /--------------------------------------------------------------- "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonb at google.com Fri Mar 22 08:43:42 2013 From: simonb at google.com (Simon Baldwin) Date: Fri, 22 Mar 2013 12:43:42 +0000 Subject: [SciPy-User] int32 overflow and sqrt of -ve number in scipy.stats.wilcoxon Message-ID: I have run into two distinct but possibly connected failure modes in scipy.stats.wilcoxon. Before bug-reporting, I'd appreciate any thoughts on these. The first is an int32 overflow that occurs where a count of data ties exceeds 46341. This occurs in my real data set, comprising around 0.7M samples. The cause is 32-bit counts returned by dfreps() in futilmodule.c. 
Minimal failing demonstration: import numpy import scipy.stats numpy.seterr(all='raise') # Raises: FloatingPointError: overflow encountered in int_scalars, morestats.py, line 1242 scipy.stats.wilcoxon([0.1] * 46341) # 46341^2 > 2^31-1 The second arises from an attempt to take sqrt of a negative number. Again, this is in wilcoxon data tie handling and occurs in real data. Demonstration: import numpy import scipy.stats numpy.seterr(all='raise') # Raises: FloatingPointError: invalid value encountered in sqrt, morestats.py, line 1244 scipy.stats.wilcoxon([0.1] * 10) Both on scipy version 0.9, the latest I have access to. I've taken a quick look at the 0.11 code for wilcoxon, and didn't notice any visible functional change. Thanks. -- Google UK Limited | Registered Office: Belgrave House, 76 Buckingham Palace Road, London SW1W 9TQ | Registered in England Number: 3977902 From josef.pktd at gmail.com Fri Mar 22 10:59:47 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 22 Mar 2013 10:59:47 -0400 Subject: [SciPy-User] int32 overflow and sqrt of -ve number in scipy.stats.wilcoxon In-Reply-To: References: Message-ID: On Fri, Mar 22, 2013 at 8:43 AM, Simon Baldwin wrote: > I have run into two distinct but possibly connected failure modes in > scipy.stats.wilcoxon. Before bug-reporting, I'd appreciate any > thoughts on these. > > The first is an int32 overflow that occurs where a count of data ties > exceeds 46341. This occurs in my real data set, comprising around > 0.7M samples. The cause is 32-bit counts returned by dfreps() in > futilmodule.c. Minimal failing demonstration: > > import numpy > import scipy.stats > numpy.seterr(all='raise') > # Raises: FloatingPointError: overflow encountered in int_scalars, > morestats.py, line 1242 > scipy.stats.wilcoxon([0.1] * 46341) # 46341^2 > 2^31-1 this sounds like the same problem that rankdata had and Warren recently fixed. 
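The 32-bit wraparound behind this first failure can be reproduced directly with NumPy integer scalars, and widening the type before squaring avoids it (a sketch, independent of wilcoxon itself; the real fix belongs in scipy's C code):

```python
import numpy as np

si = np.int32(46341)           # tie count just past sqrt(2**31 - 1)
with np.errstate(over="ignore"):
    wrapped = si * si          # wraps around in 32-bit arithmetic
print(int(wrapped))            # a negative number, not 46341**2

# widening to 64 bits before squaring gives the exact result
ok = np.int64(si) * np.int64(si)
print(int(ok))
```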
My guess is that we can implement wilcoxon using the new rankdata (maybe, if we can get the tie counts). Otherwise, it would need a change to the c code. (old code that wasn't designed for modern computers ?) > > The second arises from an attempt to take sqrt of a negative number. > Again, this is in wilcoxon data tie handling and occurs in real data. > Demonstration: > > import numpy > import scipy.stats > numpy.seterr(all='raise') > # Raises: FloatingPointError: invalid value encountered in sqrt, > morestats.py, line 1244 > scipy.stats.wilcoxon([0.1] * 10) That might be a numerical precision issue in a corner case (-0) that should be explicitly handled. Please open a ticket. Josef > > Both on scipy version 0.9, the latest I have access to. I've taken a > quick look at the 0.11 code for wilcoxon, and didn't notice any > visible functional change. > > Thanks. > > -- > Google UK Limited | Registered Office: Belgrave House, 76 Buckingham > Palace Road, London SW1W 9TQ | Registered in England Number: 3977902 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From simonb at google.com Fri Mar 22 11:57:35 2013 From: simonb at google.com (Simon Baldwin) Date: Fri, 22 Mar 2013 15:57:35 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?int32_overflow_and_sqrt_of_-ve_number_in?= =?utf-8?q?=09scipy=2Estats=2Ewilcoxon?= References: Message-ID: gmail.com> writes: > > On Fri, Mar 22, 2013 at 8:43 AM, Simon Baldwin google.com> wrote: > > I have run into two distinct but possibly connected failure modes in > > scipy.stats.wilcoxon. Before bug-reporting, I'd appreciate any > > thoughts on these. > ... > > this sounds like the same problem that rankdata had and Warren recently fixed. > > My guess is that we can implement wilcoxon using the new rankdata > (maybe, if we can get the tie counts). > Otherwise, it would need a change to the c code. 
> > (old code that wasn't designed for modern computers ?) Thanks for the note. The offending computation is corr += 0.5*si*(si*si-1.0) so I think just converting si from int32 to int64 or float before use would solve the majority of cases. > > ... > > That might be a numerical precision issue in a corner case (-0) that should be > explicitly handled. I think it may be a more egregious error than that. In this example the value being passed to sqrt is -443.05555555555554. So far I haven't found an easy workaround, not least because I don't yet understand the ties loop in scipy.stats.wilcoxon. > > Please open a ticket. Will do. Thanks. -- Google UK Limited | Registered Office: Belgrave House, 76 Buckingham Palace Road, London SW1W 9TQ | Registered in England Number: 3977902 From jsseabold at gmail.com Fri Mar 22 16:50:13 2013 From: jsseabold at gmail.com (Skipper Seabold) Date: Fri, 22 Mar 2013 16:50:13 -0400 Subject: [SciPy-User] Using LU Decomposition from UMFPACK? Message-ID: I thought I built scipy with UMFPACK support but I can't seem to get this to work. I would like to do a sparse LU decomposition and recover the U matrix. I built SuiteSparse so long ago, so might there have been a build issue I haven't noticed until now? scipy.test() does not show any errors, though there are 28 skipped tests. Does this work for anyone?
[~/]
[1]: from scipy import version

[~/]
[2]: version.full_version
[2]: '0.13.0.dev-61f05fe'

[~/]
[3]: from scipy.sparse.linalg.dsolve import umfpack

[~/]
[4]: umf = umfpack.UmfpackContext()
Exception AttributeError: "'UmfpackContext' object has no attribute
'_symbolic'" in > ignored
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
 in ()
----> 1 umf = umfpack.UmfpackContext("di")

/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/dsolve/umfpack/umfpack.pyc
in __init__(self, family, **kwargs)
    279                  a warning is issued (default: 1e12)"""
    280         if _um is None:
--> 281             raise ImportError('Scipy was built without UMFPACK support. '
    282                               'You need to install the UMFPACK library and '
    283                               'header files before building scipy.')

ImportError: Scipy was built without UMFPACK support. You need to install
the UMFPACK library and header files before building scipy.

[~/]
[5]: from scipy import show_config

[~/]
[6]: show_config()
amd_info:
    libraries = ['amd']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('SCIPY_AMD_H', None)]
    swig_opts = ['-I/home/skipper/atlas_build2/include']
    include_dirs = ['/home/skipper/atlas_build2/include']
umfpack_info:
    libraries = ['umfpack', 'amd']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)]
    swig_opts = ['-I/home/skipper/atlas_build2/include',
'-I/home/skipper/atlas_build2/include']
    include_dirs = ['/home/skipper/atlas_build2/include']
atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('NO_ATLAS_INFO', -1)]
    language = f77
    include_dirs = ['/home/skipper/atlas_build2/include']
blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('NO_ATLAS_INFO', -1)]
    language = c
    include_dirs = ['/home/skipper/atlas_build2/include']
atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('NO_ATLAS_INFO', -1)]
    language = c
    include_dirs = ['/home/skipper/atlas_build2/include']
lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/home/skipper/atlas_build2/lib']
    define_macros = [('NO_ATLAS_INFO', -1)]
    language = f77
    include_dirs = ['/home/skipper/atlas_build2/include']
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jsseabold at gmail.com  Fri Mar 22 17:04:33 2013
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 22 Mar 2013 17:04:33 -0400
Subject: [SciPy-User] Using LU Decomposition from UMFPACK?
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 22, 2013 at 4:50 PM, Skipper Seabold wrote:

> I thought I built scipy with UMFPACK support but I can't seem to get this
> to work. I would like to do a sparse LU decomposition and recover the U
> matrix.
>
> I built SuiteSparse so long ago, so might there have been a build issue I
> haven't noticed until now?
>
> scipy.test() does not show any errors, though there are 28 skipped tests.
>
> Does this work for anyone?
> [snip: quoted session and build configuration repeated from the message
> above]

Ah, might it be this? From SuiteSparse/AMD/Doc/ChangeLog

Jun 1, 2012: version 2.3.0

    * changed from UFconfig to SuiteSparse_config

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jsseabold at gmail.com  Fri Mar 22 17:39:53 2013
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 22 Mar 2013 17:39:53 -0400
Subject: [SciPy-User] Using LU Decomposition from UMFPACK?
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 22, 2013 at 5:04 PM, Skipper Seabold wrote:

> On Fri, Mar 22, 2013 at 4:50 PM, Skipper Seabold wrote:
>
>> I thought I built scipy with UMFPACK support but I can't seem to get this
>> to work. I would like to do a sparse LU decomposition and recover the U
>> matrix.
>>
>> [snip: quoted session and build configuration repeated from earlier
>> messages]
>
> Ah, might it be this? From SuiteSparse/AMD/Doc/ChangeLog
>
> Jun 1, 2012: version 2.3.0
>
>     * changed from UFconfig to SuiteSparse_config

I suppose not. Also, does this look right? Should they be shared objects?
(I'm just about out of my depth)

[~/src/SuiteSparse]
|27 $ ls ~/atlas_build2/lib/
libamd.a     libcholmod.a    liblapack.a    libptlapack.a  libtstatlas.a
libatlas.a   libf77blas.a    libptcblas.a   libsatlas.so   libumfpack.a
libcblas.a   libf77refblas.a libptf77blas.a libtatlas.so

I have the -fPIC flag set when building SuiteSparse.

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Sat Mar 23 12:17:55 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 23 Mar 2013 12:17:55 -0400
Subject: [SciPy-User] expanding optimize.brentq
In-Reply-To: 
References: 
Message-ID: 

In case someone else is interested, I'm pretty much done with
brentq_expanding.

https://github.com/statsmodels/statsmodels/pull/723#diff-2

It doesn't work as well as I hoped without specifying anything, but
specifying soft bounds works pretty well in my test cases.

Josef

From jsseabold at gmail.com  Sat Mar 23 13:20:31 2013
From: jsseabold at gmail.com (Skipper Seabold)
Date: Sat, 23 Mar 2013 13:20:31 -0400
Subject: [SciPy-User] Using LU Decomposition from UMFPACK?
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 22, 2013 at 5:39 PM, Skipper Seabold wrote:

> On Fri, Mar 22, 2013 at 5:04 PM, Skipper Seabold wrote:
>
>> On Fri, Mar 22, 2013 at 4:50 PM, Skipper Seabold wrote:
>>
>>> I thought I built scipy with UMFPACK support but I can't seem to get
>>> this to work. I would like to do a sparse LU decomposition and recover
>>> the U matrix.
>>>
>>> I built SuiteSparse so long ago, so might there have been a build issue
>>> I haven't noticed until now?
>>>
>>> scipy.test() does not show any errors, though there are 28 skipped
>>> tests.
>>>
>>> Does this work for anyone?
>>> [snip: quoted session and build configuration repeated from earlier
>>> messages]
>>
>> Ah, might it be this? From SuiteSparse/AMD/Doc/ChangeLog
>>
>> Jun 1, 2012: version 2.3.0
>>
>>     * changed from UFconfig to SuiteSparse_config
>
> I suppose not.
> Also, does this look right? Should they be shared objects?
> (I'm just about out of my depth)
>
> [~/src/SuiteSparse]
> |27 $ ls ~/atlas_build2/lib/
> libamd.a     libcholmod.a    liblapack.a    libptlapack.a  libtstatlas.a
> libatlas.a   libf77blas.a    libptcblas.a   libsatlas.so   libumfpack.a
> libcblas.a   libf77refblas.a libptf77blas.a libtatlas.so
>
> I have the -fPIC flag set when building SuiteSparse.

The issue was that there was a problem in the building of SuiteSparse. I
had not correctly linked against blas and never knew it.

Skipper
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sergio_r at mail.com  Sat Mar 23 17:37:23 2013
From: sergio_r at mail.com (Sergio Rojas)
Date: Sat, 23 Mar 2013 17:37:23 -0400
Subject: [SciPy-User] scipy 0.12.0b1 errors on python 323
Message-ID: <20130323213724.287210@gmx.com>

Hello all,

I just installed Python 3.2.3 along with numpy 1.7.0 and scipy 0.12.0b1.

While numpy.test('full') finished errors=0 and failures=0,
scipy.test('full') finished with errors=1 failures=1.

======================================================================
ERROR: Failure: ImportError (scipy.weave only supports Python 2.x)
----------------------------------------------------------------------
======================================================================
FAIL: test_mathieu_modsem2 (test_basic.TestCephes)
----------------------------------------------------------------------

My concern is whether these tests are critical or important for
numerical computing and if there is a way to fix them.

Thanks in advance

Sergio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com  Sat Mar 23 17:47:45 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 23 Mar 2013 22:47:45 +0100
Subject: [SciPy-User] scipy 0.12.0b1 errors on python 323
In-Reply-To: <20130323213724.287210@gmx.com>
References: <20130323213724.287210@gmx.com>
Message-ID: 

On Sat, Mar 23, 2013 at 10:37 PM, Sergio Rojas wrote:

> [snip: full text of the message above]

Hi Sergio, the error is expected (scipy.weave was never ported to Python
3.x), the failure will be fixed for the final 0.12.0 release. It shouldn't
affect any other functionality, so unless you use
scipy.special.mathieu_modsem2 you can ignore the failure.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jsseabold at gmail.com  Sat Mar 23 22:04:16 2013
From: jsseabold at gmail.com (Skipper Seabold)
Date: Sat, 23 Mar 2013 22:04:16 -0400
Subject: [SciPy-User] Using LU Decomposition from UMFPACK?
In-Reply-To: 
References: 
Message-ID: 

On Sat, Mar 23, 2013 at 1:20 PM, Skipper Seabold wrote:

> On Fri, Mar 22, 2013 at 5:39 PM, Skipper Seabold wrote:
>
>> On Fri, Mar 22, 2013 at 5:04 PM, Skipper Seabold wrote:
>>
>>> On Fri, Mar 22, 2013 at 4:50 PM, Skipper Seabold wrote:
>>>
>>>> I thought I built scipy with UMFPACK support but I can't seem to get
>>>> this to work. I would like to do a sparse LU decomposition and recover
>>>> the U matrix.
>>>>
>>>> [snip: quoted session and build configuration repeated from earlier
>>>> messages]
>>>
>>> Ah, might it be this?
>>> From SuiteSparse/AMD/Doc/ChangeLog
>>>
>>> Jun 1, 2012: version 2.3.0
>>>
>>>     * changed from UFconfig to SuiteSparse_config
>>
>> I suppose not. Also, does this look right? Should they be shared objects?
>> (I'm just about out of my depth)
>>
>> [~/src/SuiteSparse]
>> |27 $ ls ~/atlas_build2/lib/
>> libamd.a     libcholmod.a    liblapack.a    libptlapack.a  libtstatlas.a
>> libatlas.a   libf77blas.a    libptcblas.a   libsatlas.so   libumfpack.a
>> libcblas.a   libf77refblas.a libptf77blas.a libtatlas.so
>>
>> I have the -fPIC flag set when building SuiteSparse.
>
> The issue was that there was a problem in the building of SuiteSparse. I
> had not correctly linked against blas and never knew it.

Ok, I figured it out after some more fumbling around in the dark. This
thread [1] indicates that you need to change the default [umfpack] section
in site.cfg to read

[umfpack]
umfpack_libs = umfpack, cholmod, colamd, amd

I didn't have any luck with my old ATLAS/LAPACK, but I was able to link
SuiteSparse (0.4.2) against OpenBlas (current master) and add the following
to site.cfg.

[umfpack]
umfpack_libs = umfpack, cholmod, colamd, amd, suitesparseconfig, spqr, rt

You only need rt if you didn't disable it in SuiteSparse_config.mk, and
make sure to include the correct library directory for it. It's likely
system-wide if the others aren't.

Now I can do my sparse LU decompositions, which is a great relief.

from scipy import sparse
umf = sparse.linalg.umfpack.UmfpackContext("di")
L, U, P, Q, R, recip = umf.lu(W)

Skipper

[1] http://mail.scipy.org/pipermail/scipy-dev/2011-August/016459.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sergio_r at mail.com  Sun Mar 24 09:00:53 2013
From: sergio_r at mail.com (Sergio Rojas)
Date: Sun, 24 Mar 2013 09:00:53 -0400
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
Message-ID: <20130324130053.287200@gmx.com>

When trying to install scipy-0.12.0b1 under python3.3.0 the following
error ends the building of scipy:

python3 setup.py config_fc --fcompiler=gnu95 build
...
error: Command "g++ -pthread -Wno-unused-result -DNDEBUG -g -fwrapv -O3
-Wall -fPIC -Iscipy/interpolate/src
-I/home/myPROG/Python330GNU/Linux64b/lib/python3.3/site-packages/numpy/core/include
-I/home/myPROG/Python330GNU/Linux64b/include/python3.3m -c
scipy/interpolate/src/_interpolate.cpp -o
build/temp.linux-x86_64-3.3/scipy/interpolate/src/_interpolate.o" failed
with exit status 1

Since I was able to install and test scipy-0.12.0b1 under python3.2.3 in
the same environment, could somebody point out whether this might be a bug
associated with python3.3.0?

Sergio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi  Sun Mar 24 09:08:27 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sun, 24 Mar 2013 15:08:27 +0200
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
In-Reply-To: <20130324130053.287200@gmx.com>
References: <20130324130053.287200@gmx.com>
Message-ID: 

24.03.2013 15:00, Sergio Rojas kirjoitti:
> When trying to install scipy-0.12.0b1 under python3.3.0
> the following error ends the building of scipy:
>
> python3 setup.py config_fc --fcompiler=gnu95 build
> ...
> error: Command "g++ -pthread -Wno-unused-result -DNDEBUG -g -fwrapv -O3
> -Wall -fPIC -Iscipy/interpolate/src
> -I/home/myPROG/Python330GNU/Linux64b/lib/python3.3/site-packages/numpy/core/include
> -I/home/myPROG/Python330GNU/Linux64b/include/python3.3m -c
> scipy/interpolate/src/_interpolate.cpp -o
> build/temp.linux-x86_64-3.3/scipy/interpolate/src/_interpolate.o" failed
> with exit status 1

Please put the *whole* build log to gist.github.com and paste a link to it
here. Python 3.3.0 works for me.

-- 
Pauli Virtanen

From sergio_r at mail.com  Sun Mar 24 19:07:24 2013
From: sergio_r at mail.com (Sergio Rojas)
Date: Sun, 24 Mar 2013 19:07:24 -0400
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
Message-ID: <20130324230724.287200@gmx.com>

Thanks for replying Pauli.

The output attempting to build scipy-0.12.0b1 under python3.3.0 fails
is at:

https://gist.github.com/anonymous/5233970

Sergio

[snip: digest copies of the two preceding messages trimmed]

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

End of SciPy-User Digest, Vol 115, Issue 29
*******************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi  Sun Mar 24 19:31:15 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 25 Mar 2013 01:31:15 +0200
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
In-Reply-To: <20130324230724.287200@gmx.com>
References: <20130324230724.287200@gmx.com>
Message-ID: 

Hi,

25.03.2013 01:07, Sergio Rojas kirjoitti:
> Thanks for replying Pauli.
>
> The output attempting to build scipy-0.12.0b1 under python3.3.0 fails
> is at:
>
> https://gist.github.com/anonymous/5233970

The issue seems to be that something is wrong with the combination of your
Python installation and your C++ compiler:

/home/myPROG/Python330GNU/Linux64b/include/python3.3m/longintrepr.h:47:2:
error: #error "30-bit long digits requested, but the necessary types are
not available on this platform"

It's probably nothing Scipy-specific, just that your setup is not able to
compile Python extension modules using C++. This is not a typical failure,
and I don't know how to fix it. Does your g++ compiler version match your
gcc version?

OTOH, I don't have better suggestions than trying to google for the error
message.

-- 
Pauli Virtanen

From djpine at gmail.com  Sun Mar 24 19:53:35 2013
From: djpine at gmail.com (David Pine)
Date: Sun, 24 Mar 2013 19:53:35 -0400
Subject: [SciPy-User] a routine for fitting a straight line to data
Message-ID: 

I would like to submit a routine to scipy for performing least squares
fitting of a straight line f(x) = ax + b to an (x,y) data set. There are a
number of ways of doing this currently using scipy or numpy but all have
serious drawbacks. Here is what is currently available, as far as I can
tell, and what seem to me to be their drawbacks.

1. numpy.polyfit :
   a. It is slower than it needs to be. polyfit uses matrix methods that
      are needed to find best fits to general polynomials (quadratic,
      cubic, quartic, and higher orders), but matrix methods are overkill
      when you just want to fit a straight line f(x) = ax + b to a data
      set. A direct approach can yield fits significantly faster.
   b. polyfit currently does not allow using absolute error estimates for
      weighting the data; only relative error estimates are currently
      possible. This can be fixed, but for the moment it's a problem.
   c. New or inexperienced users are unlikely to look for a routine to fit
      straight lines in a routine that is advertised as being for
      polynomials.
      This is a more important point than it may seem. Fitting data to a
      straight line is probably the most common curve fitting task
      performed, and the only one that many users will ever use. It makes
      sense to cater to such users by providing them with a routine that
      does what they want in as clear and straightforward a manner as
      possible. I am a physics professor and have seen the confusion first
      hand with a wide spectrum of students who are new to Python. It
      should not be this hard for them.

2. scipy.linalg.lstsq
   a. Using linalg.lstsq to fit a straight line is clunky and very slow
      (on the order of 10 times slower than polyfit, which is already
      slower than it needs to be).
   b. While linalg.lstsq can be used to fit data with error estimates
      (i.e. using weighting), how to do this is far from obvious. It's
      unlikely that anyone but an expert would figure out how to do it.
   c. linalg.lstsq requires the use of matrices, which will be unfamiliar
      to some users. Moreover, it should not be necessary to use matrices
      when the task at hand only involves one-dimensional arrays.

3. scipy.curve_fit
   a. This is a nonlinear fitting routine. As such, it searches for the
      global minimum in the objective function (chi-squared) rather than
      just calculating where the global minimum is using the analytical
      expressions for the best fits. It's the wrong method for the
      problem, although it will work.

Questions: What do others in the scientific Python community think about
the need for such a routine? Where should a routine to fit data to a
straight line go? It would seem to me that it should go in the
scipy.optimize package, but I wonder what others think.
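(For concreteness, the "direct approach" of point 1a amounts to five
weighted sums and the textbook normal-equation formulas. A sketch — the
name `linfit` and its interface are invented for illustration, not an
existing scipy routine; when `sigma` is given, the weights are absolute
1/sigma**2 error bars:)

```python
import numpy as np

def linfit(x, y, sigma=None):
    """Least-squares fit of y = a*x + b; returns (a, b, da, db).

    sigma: optional absolute 1-sigma errors on y; without it the
    parameter uncertainties da, db assume unit data errors.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x) if sigma is None else 1.0 / np.asarray(sigma, float) ** 2
    # The five weighted sums that determine the fit.
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx * Sx
    a = (S * Sxy - Sx * Sy) / delta    # slope
    b = (Sxx * Sy - Sx * Sxy) / delta  # intercept
    # Standard error-propagation variances for the two parameters.
    da, db = np.sqrt(S / delta), np.sqrt(Sxx / delta)
    return a, b, da, db
```

No matrices are involved, and the cost is a handful of O(n) reductions.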
David Pine From josef.pktd at gmail.com Sun Mar 24 20:19:20 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Mar 2013 20:19:20 -0400 Subject: [SciPy-User] a routine for fitting a straight line to data In-Reply-To: References: Message-ID: On Sun, Mar 24, 2013 at 7:53 PM, David Pine wrote: > I would like to submit a routine to scipy for performing least squares fitting of a straight line f(x) = ax + b to an (x,y) data set. There are a number of ways of doing this currently using scipy or numpy but all have serious drawbacks. Here is what is currently available, as far as I can tell, and what seem to me to be their drawbacks. > > 1. numpy.polyfit : > a. It is slower than it needs to be. polyfit uses matrix methods that are needed to find best fits to general polynomials (quadratic, cubic, quartic, and higher orders), but matrix methods are overkill when you just want to fit a straight line f(x)= ax + b to data set. A direct approach can yield fits significantly faster. > b. polyfit currently does not allow using absolute error estimates for weighting the data; only relative error estimates are currently possible. This can be fixed, but for the moment it's a problem. > c. New or inexperienced uses are unlikely to look for a routine to fit straight lines in a routine that is advertised as being for polynomials. This is a more important point than it may seem. Fitting data to a straight line is probably the most common curve fitting task performed, and the only one that many users will ever use. It makes sense to cater to such users by providing them with a routine that does what they want in as clear and straightforward a manner as possible. I am a physics professor and have seen the confusion first hand with a wide spectrum of students who are new to Python. It should not be this hard for them. > > 2. scipy.linalg.lstsq > a. 
Using linalg.lstsq to fit a straight line is clunky and very slow (on the order of 10 times slower than polyfit, which is already slower than it needs to be). > b. While linalg.lstsq can be used to fit data with error estimates (i.e. using weighting), how to do this is far from obvious. It's unlikely that anyone but an expert would figure out how to do it. > c. linalg.lstsq requires the use of matrices, which will be unfamiliar to some users. Moreover, it should not be necessary to use matrices when the task at hand only involves one-dimensional arrays. > > 3. scipy.curve_fit > a. This is a nonlinear fitting routine. As such, it searches for the global minimum in the objective function (chi-squared) rather than just calculating where the global minimum is using the analytical expressions for the best fits. It's the wrong method for the problem, although it will work. > > Questions: What do others in the scientific Python community think about the need for such a routine? Where should routine to fit data to a straight line go? It would seem to me that it should go in the scipy.optimize package, but I wonder what others think. scipy.stats.linregress if there is only one x Josef > David Pine > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From warren.weckesser at gmail.com Sun Mar 24 20:22:55 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 24 Mar 2013 20:22:55 -0400 Subject: [SciPy-User] a routine for fitting a straight line to data In-Reply-To: References: Message-ID: On 3/24/13, David Pine wrote: > I would like to submit a routine to scipy for performing least squares > fitting of a straight line f(x) = ax + b to an (x,y) data set. There are a > number of ways of doing this currently using scipy or numpy but all have > serious drawbacks. Here is what is currently available, as far as I can > tell, and what seem to me to be their drawbacks. 
> > 1. numpy.polyfit : > a. It is slower than it needs to be. polyfit uses matrix methods that > are needed to find best fits to general polynomials (quadratic, cubic, > quartic, and higher orders), but matrix methods are overkill when you just > want to fit a straight line f(x)= ax + b to data set. A direct approach can > yield fits significantly faster. > b. polyfit currently does not allow using absolute error estimates for > weighting the data; only relative error estimates are currently possible. > This can be fixed, but for the moment it's a problem. > c. New or inexperienced uses are unlikely to look for a routine to fit > straight lines in a routine that is advertised as being for polynomials. > This is a more important point than it may seem. Fitting data to a straight > line is probably the most common curve fitting task performed, and the only > one that many users will ever use. It makes sense to cater to such users by > providing them with a routine that does what they want in as clear and > straightforward a manner as possible. I am a physics professor and have > seen the confusion first hand with a wide spectrum of students who are new > to Python. It should not be this hard for them. > > 2. scipy.linalg.lstsq > a. Using linalg.lstsq to fit a straight line is clunky and very slow > (on the order of 10 times slower than polyfit, which is already slower than > it needs to be). > b. While linalg.lstsq can be used to fit data with error estimates > (i.e. using weighting), how to do this is far from obvious. It's unlikely > that anyone but an expert would figure out how to do it. > c. linalg.lstsq requires the use of matrices, which will be unfamiliar > to some users. Moreover, it should not be necessary to use matrices when > the task at hand only involves one-dimensional arrays. > > 3. scipy.curve_fit > a. This is a nonlinear fitting routine. 
As such, it searches for the > global minimum in the objective function (chi-squared) rather than just > calculating where the global minimum is using the analytical expressions for > the best fits. It's the wrong method for the problem, although it will > work. > > Questions: What do others in the scientific Python community think about > the need for such a routine? Where should routine to fit data to a > straight line go? It would seem to me that it should go in the > scipy.optimize package, but I wonder what others think. > > David Pine David, There is also scipy.stats.linregress, which is a basic 1-D (ie. x and y are 1-D vectors) linear regression. Warren > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From djpine at gmail.com Sun Mar 24 21:27:09 2013 From: djpine at gmail.com (David Pine) Date: Sun, 24 Mar 2013 21:27:09 -0400 Subject: [SciPy-User] a routine for fitting a straight line to data In-Reply-To: References: Message-ID: <41FE3C6C-5337-4B4C-A742-812B9A9340C3@gmail.com> scipy.stats.linregress does not do the job. First, it does not allow for weighting of the data, relative, absolute, or otherwise. It does not return the covariance matrix, which provides estimates of the uncertainties in the fitting parameters, nor does it return chi-squared, which is the standard measure of the quality of the fit in the physical sciences. David On Mar 24, 2013, at 8:19 PM, josef.pktd at gmail.com wrote: > On Sun, Mar 24, 2013 at 7:53 PM, David Pine wrote: >> I would like to submit a routine to scipy for performing least squares fitting of a straight line f(x) = ax + b to an (x,y) data set. There are a number of ways of doing this currently using scipy or numpy but all have serious drawbacks. Here is what is currently available, as far as I can tell, and what seem to me to be their drawbacks. >> >> 1. numpy.polyfit : >> a. It is slower than it needs to be. 
polyfit uses matrix methods that are needed to find best fits to general polynomials (quadratic, cubic, quartic, and higher orders), but matrix methods are overkill when you just want to fit a straight line f(x)= ax + b to data set. A direct approach can yield fits significantly faster. >> b. polyfit currently does not allow using absolute error estimates for weighting the data; only relative error estimates are currently possible. This can be fixed, but for the moment it's a problem. >> c. New or inexperienced uses are unlikely to look for a routine to fit straight lines in a routine that is advertised as being for polynomials. This is a more important point than it may seem. Fitting data to a straight line is probably the most common curve fitting task performed, and the only one that many users will ever use. It makes sense to cater to such users by providing them with a routine that does what they want in as clear and straightforward a manner as possible. I am a physics professor and have seen the confusion first hand with a wide spectrum of students who are new to Python. It should not be this hard for them. >> >> 2. scipy.linalg.lstsq >> a. Using linalg.lstsq to fit a straight line is clunky and very slow (on the order of 10 times slower than polyfit, which is already slower than it needs to be). >> b. While linalg.lstsq can be used to fit data with error estimates (i.e. using weighting), how to do this is far from obvious. It's unlikely that anyone but an expert would figure out how to do it. >> c. linalg.lstsq requires the use of matrices, which will be unfamiliar to some users. Moreover, it should not be necessary to use matrices when the task at hand only involves one-dimensional arrays. >> >> 3. scipy.curve_fit >> a. This is a nonlinear fitting routine. As such, it searches for the global minimum in the objective function (chi-squared) rather than just calculating where the global minimum is using the analytical expressions for the best fits. 
It's the wrong method for the problem, although it will work. >> >> Questions: What do others in the scientific Python community think about the need for such a routine? Where should routine to fit data to a straight line go? It would seem to me that it should go in the scipy.optimize package, but I wonder what others think. > > scipy.stats.linregress if there is only one x > > Josef > > >> David Pine >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From Jerome.Kieffer at esrf.fr Mon Mar 25 02:28:52 2013 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Mon, 25 Mar 2013 07:28:52 +0100 Subject: [SciPy-User] a routine for fitting a straight line to data In-Reply-To: <41FE3C6C-5337-4B4C-A742-812B9A9340C3@gmail.com> References: <41FE3C6C-5337-4B4C-A742-812B9A9340C3@gmail.com> Message-ID: <20130325072852.d2c97ffd.Jerome.Kieffer@esrf.fr> On Sun, 24 Mar 2013 21:27:09 -0400 David Pine wrote: > scipy.stats.linregress does not do the job. First, it does not allow for weighting of the data, relative, absolute, or otherwise. It does not return the covariance matrix, which provides estimates of the uncertainties in the fitting parameters, nor does it return chi-squared, which is the standard measure of the quality of the fit in the physical sciences. I implemented a linear regression according to what I found in stats textbooks (and various pages of wikipedia) ... 
(arrays are 2D, considered as m datasets of length n)

Sx = (w * x).sum(axis=-1)
Sy = (w * y).sum(axis=-1)
Sxx = (w * x * x).sum(axis=-1)
Sxy = (w * y * x).sum(axis=-1)
Syy = (w * y * y).sum(axis=-1)
Sw = w.sum(axis=-1)
slope = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
intercept = (Sy - Sx * slope) / Sw
df = n - 2
r_num = ssxym = (Sw * Sxy) - (Sx * Sy)
ssxm = Sw * Sxx - Sx * Sx
ssym = Sw * Syy - Sy * Sy
r_den = numpy.sqrt(ssxm * ssym)
correlationR = r_num / r_den
correlationR[r_den == 0] = 0.0
correlationR[correlationR > 1.0] = 1.0    # Numerical errors
correlationR[correlationR < -1.0] = -1.0  # Numerical errors
sterrest = numpy.sqrt((1.0 - correlationR * correlationR) * ssym / ssxm / df)

Hope this helps

--
Jérôme Kieffer
Data analysis unit - ESRF

From sloan.lindsey at gmail.com Mon Mar 25 07:55:30 2013
From: sloan.lindsey at gmail.com (Sloan Lindsey)
Date: Mon, 25 Mar 2013 12:55:30 +0100
Subject: [SciPy-User] cobyla
In-Reply-To:
References: <1349039373.72572.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

How does one parse the doc string? I'm interested in having my script know when MAXFUN is reached. I see it coming out as a print line (I have to use 0.7.2 because of an older CentOS install), but I'd really like my script to know that it has happened so I can tag that solution as suspect. Is there any cute way to redirect that output into something I can parse?

-Sloan

On Mon, Oct 1, 2012 at 9:58 PM, Ralf Gommers wrote: > > > On Sun, Sep 30, 2012 at 11:09 PM, The Helmbolds wrote: >> >> On my system (Windows 7, Python 2.7.x and IDLE, latest SciPy), I observe >> the following behavior with fmin_cobyla and minimize's COBYLA method. >> >> Case 1: When run either in the IDLE interactive shell or within an >> enclosing Python program: >> 1.1. The fmin_cobyla function never returns the Results dictionary, >> and never displays it to Python's stdout. This is true regardless of the >> function call's disp setting. > > > Correct.
The fmin_cobyla docstring clearly says what it returns. Result > objects are only returned by the new interfaces in the 0.11.0 release > (minimize, minimize_scalar, root). > >> 1.2. The 'minimize' function always returns the Results dictionary but >> never displays it to Python's stdout. Again, this is true regardless of the >> function call's disp setting. > > > `disp` doesn't print the Results objects. For me it works as advertized (in > IPython), it prints something like: > > Normal return from subroutine COBYLA > > NFVALS = 37 F = 8.000000E-01 MAXCV = 0.000000E+00 > X = 1.400113E+00 1.700056E+00 > > Ralf > >> >> Case 2: When run interactively in Window's Command Prompt box: >> 2.1 The fmin_cobyla function never returns the Result dictionary, >> regardless of the function call's disp setting. Setting disp to True or >> False either displays the Results dictionary in the command box or not >> (respectively). I don't think the Results dictionary gets to the command box >> via stdout. >> 2.2 The 'minimize' function always returns the Result dictionary, >> regardless of the function call's disp setting. Setting disp to True or >> False either displays the Results dictionary in the command box or not >> (respectively). I don't think the Results dictionary gets to the command box >> via stdout. >> >> My thanks to all who helped clarify this situation. >> >> Bob H >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pierrega at consortech.com Mon Mar 25 05:51:42 2013 From: pierrega at consortech.com (Pierre Gauthier) Date: Mon, 25 Mar 2013 05:51:42 -0400 Subject: [SciPy-User] scipy.spatial.Delaunay qhull_option Message-ID: Hi, What is the syntax when I want to set the "En" qhull_option? 
This is not working:

d = scipy.spatial.Delaunay(nd, qhull_options="Qz Qt QJ En 0.0001")

Thanks,
Pierre

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sergio_r at mail.com Mon Mar 25 09:21:21 2013
From: sergio_r at mail.com (Sergio Rojas)
Date: Mon, 25 Mar 2013 09:21:21 -0400
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
Message-ID: <20130325132121.287210@gmx.com>

Thanks Pauli. Your hint

> > The issue seems to that something is wrong with the combination of your
> Python installation and your C++ compiler:
>

sent me to revise my python3.3.0 installation setup. In building python3.3.0, I passed the configure script the option "--enable-big-digits", which I thought was the replacement for the option "--with-wide-unicode" that I used when building python3.2.3 (the configure script of python3.3.0 does not recognize the option "--with-wide-unicode").

So, I took out the option "--enable-big-digits" when building python3.3.0, and then the build of scipy-0.12.0b1 completed without errors.

This time, however, I am getting a failure in "test_mio.test_save_dict" which did not happen when building scipy-0.12.0b1 under python 3.2.3. What might be the consequences of that failure in using scipy?
(Ralf has already shed some light on the consequences of using scipy in spite of the other two test failures)

What follows is a summary of "scipy.test('full', verbose=2)" under both python3.3.0 and python3.2.3:

Scipy scipy-0.12.0b1
NumPy version 1.7.0
Python version 3.3.0 (default, Mar 25 2013, 06:25:03) [GCC 4.6.1]
======================================================================
ERROR: Failure: ImportError (scipy.weave only supports Python 2.x)
----------------------------------------------------------------------
======================================================================
FAIL: test_mio.test_save_dict
----------------------------------------------------------------------
======================================================================
FAIL: test_mathieu_modsem2 (test_basic.TestCephes)
----------------------------------------------------------------------
----------------------------------------------------------------------
Ran 6426 tests in 277.767s
FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=2)
nose version 1.2.1

****************

Scipy scipy-0.12.0b1
NumPy version 1.7.0
Python version 3.2.3 (default, Mar 23 2013, 07:28:36) [GCC 4.6.1]
======================================================================
ERROR: Failure: ImportError (scipy.weave only supports Python 2.x)
----------------------------------------------------------------------
======================================================================
FAIL: test_mathieu_modsem2 (test_basic.TestCephes)
----------------------------------------------------------------------
Ran 6426 tests in 277.812s
FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=1)
nose version 1.2.1

Thanks again for your help,

Sergio

------------------------------

Message: 2
Date: Mon, 25 Mar 2013 01:31:15 +0200
From: Pauli Virtanen
Subject: Re: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
To: scipy-user at scipy.org
Message-ID:
Content-Type: text/plain; charset=ISO-8859-1

Hi,
25.03.2013 01:07, Sergio Rojas wrote:
> Thanks for replying Pauli.
>
> The output attempting to build scipy-0.12.0b1 under python3.3.0 fails
> is at:
>
> https://gist.github.com/anonymous/5233970

The issue seems to be that something is wrong with the combination of your Python installation and your C++ compiler:

/home/myPROG/Python330GNU/Linux64b/include/python3.3m/longintrepr.h:47:2: error: #error "30-bit long digits requested, but the necessary types are not available on this platform"

It's probably nothing Scipy-specific, just that your setup is not able to compile Python extension modules using C++.

This is not a typical failure, and I don't know how to fix it. Does your g++ compiler version match your gcc version? OTOH, I don't have better suggestions than trying to google for the error message.

-- Pauli Virtanen

------------------------------

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From takowl at gmail.com Mon Mar 25 09:28:27 2013
From: takowl at gmail.com (Thomas Kluyver)
Date: Mon, 25 Mar 2013 13:28:27 +0000
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
In-Reply-To: <20130325132121.287210@gmx.com>
References: <20130325132121.287210@gmx.com>
Message-ID:

On 25 March 2013 13:21, Sergio Rojas wrote:
> (the configuration script of python3.3.0 does not recognize the option
> "--with-wide-unicode").

'Wide unicode' is no longer necessary - all builds of 3.3 will behave mostly like wide unicode builds, but with less memory use. For details, see: http://docs.python.org/3/whatsnew/3.3.html#pep-393-flexible-string-representation

From a quick search, 'big digits' seems to relate to how large integers are stored.

Thomas

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From pav at iki.fi Mon Mar 25 13:20:49 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 25 Mar 2013 19:20:49 +0200
Subject: [SciPy-User] scipy.spatial.Delaunay qhull_option
In-Reply-To:
References:
Message-ID:

25.03.2013 11:51, Pierre Gauthier wrote:
[clip]
> What is the syntax when I want to set the "En" qhull_option?
>
> This is not working:
>
> d = scipy.spatial.Delaunay(nd, qhull_options="Qz Qt QJ En 0.0001")

It should be the same as on the qhull command line: "Qz Qt QJ E0.0001"

-- Pauli Virtanen

From ralf.gommers at gmail.com Mon Mar 25 15:38:40 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 25 Mar 2013 20:38:40 +0100
Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails
In-Reply-To: <20130325132121.287210@gmx.com>
References: <20130325132121.287210@gmx.com>
Message-ID:

On Mon, Mar 25, 2013 at 2:21 PM, Sergio Rojas wrote:
> Thanks Pauli. Your hint
>
> >
> > The issue seems to that something is wrong with the combination of your
> > Python installation and your C++ compiler:
> >
>
> send me to revise my python3.3.0 installation setup. In building python3.3.0
> I passed to the configure script the option "--enable-big-digits"
> which I was thinking to be the replacement of the option
> "--with-wide-unicode" that I used when building python3.2.3.
> (the configuration script of python3.3.0 does not recognize the option
> "--with-wide-unicode").
>
> So, I took out the option "--enable-big-digits" in building python3.3.0
> and then the build of scipy-0.12.0b1 finalized without errors.
>
> This time, however, I am getting a failing in the "test_mio.test_save_dict"
> which did not happened in the building of scipy-0.12.0b1
> under python 3.2.3. What might be the consequences of that failure
> in using scipy? (Ralf has already shed some light on the consequences
> of using scipy in spite of the other two test failures)
>

I'm seeing that failure as well on 3.2, will investigate for 0.12.
It looks like saving Matlab files is partially broken on 3.x by this. No further impact. Ralf > > What follows is a summary of "scipy.test('full', verbose=2)" under both > both python3.3.0 and python3.2.3: > > Scipy scipy-0.12.0b1 > NumPy version 1.7.0 > Python version 3.3.0 (default, Mar 25 2013, 06:25:03) [GCC 4.6.1] > ====================================================================== > ERROR: Failure: ImportError (scipy.weave only supports Python 2.x) > ---------------------------------------------------------------------- > ====================================================================== > FAIL: test_mio.test_save_dict > ---------------------------------------------------------------------- > ====================================================================== > FAIL: test_mathieu_modsem2 (test_basic.TestCephes) > ---------------------------------------------------------------------- > ---------------------------------------------------------------------- > Ran 6426 tests in 277.767s > FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=2) > nose version 1.2.1 > > **************** > > Scipy scipy-0.12.0b1 > NumPy version 1.7.0 > Python version 3.2.3 (default, Mar 23 2013, 07:28:36) [GCC 4.6.1] > ====================================================================== > ERROR: Failure: ImportError (scipy.weave only supports Python 2.x) > ---------------------------------------------------------------------- > ====================================================================== > FAIL: test_mathieu_modsem2 (test_basic.TestCephes) > ---------------------------------------------------------------------- > Ran 6426 tests in 277.812s > FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=1) > nose version 1.2.1 > > > Thanks again for your help, > > Sergio > > > > > > > > > > ------------------------------ > > Message: 2 > Date: Mon, 25 Mar 2013 01:31:15 +0200 > From: Pauli Virtanen > Subject: Re: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 > fails > 
To: scipy-user at scipy.org > Message-ID: > Content-Type: text/plain; charset=ISO-8859-1 > > Hi, > > 25.03.2013 01:07, Sergio Rojas kirjoitti: > > Thanks for replying Pauli. > > > > The output attempting to build scipy-0.12.0b1 under python3.3.0 fails > > is at: > > > > https://gist.github.com/anonymous/5233970 > > The issue seems to that something is wrong with the combination of your > Python installation and your C++ compiler: > > /home/myPROG/Python330GNU/Linux64b/include/python3.3m/longintrepr.h:47:2: error: > #error "30-bit long digits requested, but the necessary types are not > available on this platform" > > It's probably nothing Scipy-specific, just that your setup is not able > to compile Python extension modules using C++. > > This is not a typical failure, and I don't know how to fix it. Does your > g++ compiler version match your gcc version? OTOH, I don't have better > suggestions than trying to google for the error message. > > -- > Pauli Virtanen > > > > ------------------------------ > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Mar 25 16:19:03 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 25 Mar 2013 21:19:03 +0100 Subject: [SciPy-User] cobyla In-Reply-To: References: <1349039373.72572.YahooMailNeo@web31805.mail.mud.yahoo.com> Message-ID: On Mon, Mar 25, 2013 at 12:55 PM, Sloan Lindsey wrote: > How does one parse the doc string? I'm interested in having my script > know when MAXFUN is reached. I see it coming out as a print line ( I > have to use 0.7.2 because of an older centOS install), but I'd really > like my script to know that it has happened so I can tag that solution > as suspect. Is there any cute way to redirect that output into > something I can parse? 
> I don't think there's a good way to get at it. The output comes from some Fortran code, so redirecting sys.stdout won't help you. Ralf > > -Sloan > > On Mon, Oct 1, 2012 at 9:58 PM, Ralf Gommers > wrote: > > > > > > On Sun, Sep 30, 2012 at 11:09 PM, The Helmbolds > wrote: > >> > >> On my system (Windows 7, Python 2.7.x and IDLE, latest SciPy), I observe > >> the following behavior with fmin_cobyla and minimize's COBYLA method. > >> > >> Case 1: When run either in the IDLE interactive shell or within an > >> enclosing Python program: > >> 1.1. The fmin_cobyla function never returns the Results dictionary, > >> and never displays it to Python's stdout. This is true regardless of the > >> function call's disp setting. > > > > > > Correct. The fmin_cobyla docstring clearly says what it returns. Result > > objects are only returned by the new interfaces in the 0.11.0 release > > (minimize, minimize_scalar, root). > > > >> 1.2. The 'minimize' function always returns the Results dictionary > but > >> never displays it to Python's stdout. Again, this is true regardless of > the > >> function call's disp setting. > > > > > > `disp` doesn't print the Results objects. For me it works as advertized > (in > > IPython), it prints something like: > > > > Normal return from subroutine COBYLA > > > > NFVALS = 37 F = 8.000000E-01 MAXCV = 0.000000E+00 > > X = 1.400113E+00 1.700056E+00 > > > > Ralf > > > >> > >> Case 2: When run interactively in Window's Command Prompt box: > >> 2.1 The fmin_cobyla function never returns the Result dictionary, > >> regardless of the function call's disp setting. Setting disp to True or > >> False either displays the Results dictionary in the command box or not > >> (respectively). I don't think the Results dictionary gets to the > command box > >> via stdout. > >> 2.2 The 'minimize' function always returns the Result dictionary, > >> regardless of the function call's disp setting. 
Setting disp to True or > >> False either displays the Results dictionary in the command box or not > >> (respectively). I don't think the Results dictionary gets to the > command box > >> via stdout. > >> > >> My thanks to all who helped clarify this situation. > >> > >> Bob H > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Mar 25 17:36:35 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 25 Mar 2013 22:36:35 +0100 Subject: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0 fails In-Reply-To: References: <20130325132121.287210@gmx.com> Message-ID: On Mon, Mar 25, 2013 at 8:38 PM, Ralf Gommers wrote: > > > > On Mon, Mar 25, 2013 at 2:21 PM, Sergio Rojas wrote: > >> Thanks Pauli. Your hint >> >> >> > >> > The issue seems to that something is wrong with the combination of your >> > Python installation and your C++ compiler: >> > >> >> send me to revise my python3.3.0 installation setup. In building >> python3.3.0 >> I passed to the configure script the option "--enable-big-digits" >> which I was thinking to be the replacement of the option >> "--with-wide-unicode" that I used when building python3.2.3. >> (the configuration script of python3.3.0 does not recognize the option >> "--with-wide-unicode"). >> >> So, I took out the option "--enable-big-digits" in building python3.3.0 >> and then the build of scipy-0.12.0b1 finalized without errors. 
>> >> This time, however, I am getting a failing in the >> "test_mio.test_save_dict" >> which did not happened in the building of scipy-0.12.0b1 >> under python 3.2.3. What might be the consequences of that failure >> in using scipy? (Ralf has already shed some light on the consequences >> of using scipy in spite of the other two test failures) >> > > I'm seeing that failure as well on 3.2, will investigate for 0.12. It > looks like saving Matlab files is partially broken on 3.x by this. No > further impact. > fixed in commit 31db7fc > > Ralf > > >> >> What follows is a summary of "scipy.test('full', verbose=2)" under both >> both python3.3.0 and python3.2.3: >> >> Scipy scipy-0.12.0b1 >> NumPy version 1.7.0 >> Python version 3.3.0 (default, Mar 25 2013, 06:25:03) [GCC 4.6.1] >> ====================================================================== >> ERROR: Failure: ImportError (scipy.weave only supports Python 2.x) >> ---------------------------------------------------------------------- >> ====================================================================== >> FAIL: test_mio.test_save_dict >> ---------------------------------------------------------------------- >> ====================================================================== >> FAIL: test_mathieu_modsem2 (test_basic.TestCephes) >> ---------------------------------------------------------------------- >> ---------------------------------------------------------------------- >> Ran 6426 tests in 277.767s >> FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=2) >> nose version 1.2.1 >> >> **************** >> >> Scipy scipy-0.12.0b1 >> NumPy version 1.7.0 >> Python version 3.2.3 (default, Mar 23 2013, 07:28:36) [GCC 4.6.1] >> ====================================================================== >> ERROR: Failure: ImportError (scipy.weave only supports Python 2.x) >> ---------------------------------------------------------------------- >> 
======================================================================
>> FAIL: test_mathieu_modsem2 (test_basic.TestCephes)
>> ----------------------------------------------------------------------
>> Ran 6426 tests in 277.812s
>> FAILED (KNOWNFAIL=15, SKIP=47, errors=1, failures=1)
>> nose version 1.2.1
>>
>> Thanks again for your help,
>>
>> Sergio
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Mon, 25 Mar 2013 01:31:15 +0200
>> From: Pauli Virtanen
>> Subject: Re: [SciPy-User] Build of scipy-0.12.0b1 under python3.3.0
>> fails
>> To: scipy-user at scipy.org
>> Message-ID:
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Hi,
>> 25.03.2013 01:07, Sergio Rojas kirjoitti:
>> > Thanks for replying Pauli.
>> >
>> > The output of the failing attempt to build scipy-0.12.0b1 under
>> > python3.3.0 is at:
>> >
>> > https://gist.github.com/anonymous/5233970
>>
>> The issue seems to be that something is wrong with the combination of your
>> Python installation and your C++ compiler:
>>
>> /home/myPROG/Python330GNU/Linux64b/include/python3.3m/longintrepr.h:47:2: error:
>> #error "30-bit long digits requested, but the necessary types are not
>> available on this platform"
>>
>> It's probably nothing Scipy-specific, just that your setup is not able
>> to compile Python extension modules using C++.
>>
>> This is not a typical failure, and I don't know how to fix it. Does your
>> g++ compiler version match your gcc version? OTOH, I don't have better
>> suggestions than trying to google for the error message.
>>
>> --
>> Pauli Virtanen
>>
>> ------------------------------
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
From sloan.lindsey at gmail.com  Tue Mar 26 07:09:19 2013
From: sloan.lindsey at gmail.com (Sloan Lindsey)
Date: Tue, 26 Mar 2013 12:09:19 +0100
Subject: [SciPy-User] cobyla
In-Reply-To:
References: <1349039373.72572.YahooMailNeo@web31805.mail.mud.yahoo.com>
Message-ID:

That's a bummer, it would be nice to see it in a future version. For
anyone who is interested in doing something similar, I ended up passing
a counter variable in the args that is incremented every time the fn to
be minimized is called. I'm still debugging so I'll let everyone know if
it doesn't work.

-Sloan

On Mon, Mar 25, 2013 at 9:19 PM, Ralf Gommers wrote:
>
> On Mon, Mar 25, 2013 at 12:55 PM, Sloan Lindsey wrote:
>>
>> How does one parse the doc string? I'm interested in having my script
>> know when MAXFUN is reached. I see it coming out as a print line ( I
>> have to use 0.7.2 because of an older centOS install), but I'd really
>> like my script to know that it has happened so I can tag that solution
>> as suspect. Is there any cute way to redirect that output into
>> something I can parse?
>
> I don't think there's a good way to get at it. The output comes from some
> Fortran code, so redirecting sys.stdout won't help you.
>
> Ralf
>
>> -Sloan
>>
>> On Mon, Oct 1, 2012 at 9:58 PM, Ralf Gommers wrote:
>> >
>> > On Sun, Sep 30, 2012 at 11:09 PM, The Helmbolds wrote:
>> >>
>> >> On my system (Windows 7, Python 2.7.x and IDLE, latest SciPy), I
>> >> observe
>> >> the following behavior with fmin_cobyla and minimize's COBYLA method.
>> >>
>> >> Case 1: When run either in the IDLE interactive shell or within an
>> >> enclosing Python program:
>> >> 1.1. The fmin_cobyla function never returns the Results dictionary,
>> >> and never displays it to Python's stdout. This is true regardless of
>> >> the
>> >> function call's disp setting.
>> >
>> > Correct. The fmin_cobyla docstring clearly says what it returns.
Result >> > objects are only returned by the new interfaces in the 0.11.0 release >> > (minimize, minimize_scalar, root). >> > >> >> 1.2. The 'minimize' function always returns the Results dictionary >> >> but >> >> never displays it to Python's stdout. Again, this is true regardless of >> >> the >> >> function call's disp setting. >> > >> > >> > `disp` doesn't print the Results objects. For me it works as advertized >> > (in >> > IPython), it prints something like: >> > >> > Normal return from subroutine COBYLA >> > >> > NFVALS = 37 F = 8.000000E-01 MAXCV = 0.000000E+00 >> > X = 1.400113E+00 1.700056E+00 >> > >> > Ralf >> > >> >> >> >> Case 2: When run interactively in Window's Command Prompt box: >> >> 2.1 The fmin_cobyla function never returns the Result dictionary, >> >> regardless of the function call's disp setting. Setting disp to True or >> >> False either displays the Results dictionary in the command box or not >> >> (respectively). I don't think the Results dictionary gets to the >> >> command box >> >> via stdout. >> >> 2.2 The 'minimize' function always returns the Result dictionary, >> >> regardless of the function call's disp setting. Setting disp to True >> >> or >> >> False either displays the Results dictionary in the command box or not >> >> (respectively). I don't think the Results dictionary gets to the >> >> command box >> >> via stdout. >> >> >> >> My thanks to all who helped clarify this situation. 
>> >>
>> >> Bob H
>> >> _______________________________________________
>> >> SciPy-User mailing list
>> >> SciPy-User at scipy.org
>> >> http://mail.scipy.org/mailman/listinfo/scipy-user
>> >
>> > _______________________________________________
>> > SciPy-User mailing list
>> > SciPy-User at scipy.org
>> > http://mail.scipy.org/mailman/listinfo/scipy-user
>> >
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
From lists at hilboll.de  Tue Mar 26 07:36:23 2013
From: lists at hilboll.de (Andreas Hilboll)
Date: Tue, 26 Mar 2013 12:36:23 +0100
Subject: [SciPy-User] constrained optimization: question about Jacobian
Message-ID: <51518837.3050701@hilboll.de>

I want to perform constrained optimization of the function

   def F(X, data):
       # x1, x2, x3, t2 are scalars, data is a constant np.array
       x1, x2, x3, t2 = X
       T = np.arange(data.size)  # one index per data point
       t1, t3 = T[0], T[-1]
       Xbefore = x1 + (T - t1) * (x2 - x1) / (t2 - t1)
       Xafter = x2 + (T - t2) * (x3 - x2) / (t3 - t2)
       Xbreak = np.where(T <= t2, Xbefore, Xafter)
       return ((Xbreak - data)**2).sum()

where the parameter value t2 must be within 0 <= t2 < data.size.

According to the tutorial
(http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize),
I need to define the Jacobian. However, I'm unsure of how to define the
derivative with regards to t2.

Any ideas are greatly appreciated :)

Cheers, A.
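For reference, a runnable version of the setup above. The data array, the starting guess, and the choice of `L-BFGS-B` are illustrative assumptions, not part of the original post; `np.arange(data.size)` is used so that the model produces one value per data point, and the bound keeps `t2` strictly inside the data range so neither denominator vanishes:

```python
import numpy as np
from scipy.optimize import minimize

def F(X, data):
    # Piecewise-linear model with a breakpoint at t2, as in the post.
    x1, x2, x3, t2 = X
    T = np.arange(data.size)
    t1, t3 = T[0], T[-1]
    Xbefore = x1 + (T - t1) * (x2 - x1) / (t2 - t1)
    Xafter = x2 + (T - t2) * (x3 - x2) / (t3 - t2)
    Xbreak = np.where(T <= t2, Xbefore, Xafter)
    return ((Xbreak - data) ** 2).sum()

# Synthetic data with a kink near index 60 (made up for illustration).
data = np.concatenate([np.linspace(0.0, 6.0, 61),
                       np.linspace(6.0, 2.0, 40)[1:]])

x0 = [0.0, 5.0, 1.0, 50.0]  # rough starting guess
res = minimize(F, x0, args=(data,), method="L-BFGS-B",
               bounds=[(None, None), (None, None), (None, None),
                       (1.0, data.size - 2.0)])
```

With `jac` omitted, the gradient is approximated by finite differences, so no analytic derivative with respect to `t2` is needed.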
From denis at laxalde.org  Tue Mar 26 07:49:41 2013
From: denis at laxalde.org (Denis Laxalde)
Date: Tue, 26 Mar 2013 12:49:41 +0100
Subject: [SciPy-User] constrained optimization: question about Jacobian
In-Reply-To: <51518837.3050701@hilboll.de>
References: <51518837.3050701@hilboll.de>
Message-ID: <51518B55.9050309@laxalde.org>

Andreas Hilboll a écrit :
> According to the tutorial
> (http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize),
> I need to define the Jacobian. However, I'm unsure of how to define the
> derivative with regards to t2.

The Jacobian is optional for most if not all solvers, so you don't
*need* to provide it.

From lists at hilboll.de  Tue Mar 26 09:15:37 2013
From: lists at hilboll.de (Andreas Hilboll)
Date: Tue, 26 Mar 2013 14:15:37 +0100
Subject: [SciPy-User] constrained optimization: question about Jacobian
In-Reply-To: <51518B55.9050309@laxalde.org>
References: <51518837.3050701@hilboll.de> <51518B55.9050309@laxalde.org>
Message-ID: <51519F79.8000003@hilboll.de>

>> According to the tutorial
>> (http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize),
>> I need to define the Jacobian. However, I'm unsure of how to define the
>> derivative with regards to t2.
>
> The Jacobian is optional for most if not all solvers, so you don't
> *need* to provide it.

Merci, Denis, I wasn't aware of that. Just followed the tutorial.[*] Now
everything's working fine.

Cheers, Andreas.

[*] I just submitted a PR to mention this:
https://github.com/scipy/scipy/pull/481

From doanviettrung at gmail.com  Tue Mar 26 19:37:02 2013
From: doanviettrung at gmail.com (DoanVietTrungAtGmail)
Date: Wed, 27 Mar 2013 10:37:02 +1100
Subject: [SciPy-User] Size of sparse boolean coo_matrix unexpectedly tiny
Message-ID:

I created an NxN (N = 10^6) sparse boolean coo_matrix A and randomly
filled it with 200,000 Trues.
sys.getsizeof(A) said A took 32 bytes - a thousand times smaller than I
expected. I am new to Scipy, please tell me what's going on, thanks.

>>> rows = [random.randint(0,999999) for j in xrange(200000)]
>>> cols = [random.randint(0,999999) for j in xrange(200000)]
>>> vals = ones_like(rows)
>>> A = sparse.coo_matrix((vals,(rows,cols)), shape = (int(1E6), int(1E6)), dtype = bool)
>>> len(vals)
200000
>>> print sys.getsizeof(A)
32
>>>

From elsuizo37 at gmail.com  Tue Mar 26 23:11:32 2013
From: elsuizo37 at gmail.com (El suisse)
Date: Wed, 27 Mar 2013 00:11:32 -0300
Subject: [SciPy-User] Sequence not work
Message-ID:

Hi
I have to represent the following sequence:

    z[2] = 2
    z[n+1] = 2^(n - 1/2) * sqrt(1 - sqrt(1 - 4^(1-n) * z[n]^2)),    n = 2, 3, 4, ...

and my code is as follows:

#!/usr/bin/env python

import matplotlib.pyplot as plt
import numpy as np

num_muestras = 100
z = np.zeros(num_muestras)

z[2] = 2.0  # In Python, counting starts from zero

for n in range(2,num_muestras-1):
    z[n+1] = np.power(2,n-(1/2)) * (np.sqrt(1- np.sqrt(1-np.power(4,1-n) * np.power(z[n],2))) )

fig1 = plt.figure()
ax = plt.gca()
plt.ylabel('$z[n+1]=2^{n-1/2}\sqrt{1-\sqrt{1-4^{1-n}z[n]^{2}}}$')
plt.xlabel('$n$')
plt.grid()
ax.plot(z)
plt.show()

but it does not work; z should tend to pi. What might be wrong?

Thanks in advance

From robert.kern at gmail.com  Wed Mar 27 07:14:04 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Mar 2013 11:14:04 +0000
Subject: [SciPy-User] Sequence not work
In-Reply-To:
References:
Message-ID:

On Wed, Mar 27, 2013 at 3:11 AM, El suisse wrote:
> Hi
> I have to represent the following sequence:
>
>     z[2] = 2
>     z[n+1] = 2^(n - 1/2) * sqrt(1 - sqrt(1 - 4^(1-n) * z[n]^2)),    n = 2, 3, 4, ...
> > > and my code is as follows: > > > #!/usr/bin/env python > > > import matplotlib.pyplot as plt > > import numpy as np > > > num_muestras = 100 > > z = np.zeros(num_muestras) > > > > z[2] = 2.0 #En Python se cuenta desde el cero > > for n in range(2,num_muestras-1): > > > z[n+1] = np.power(2,n-(1/2)) * (np.sqrt(1- np.sqrt(1-np.power(4,1-n) * > np.power(z[n],2))) ) > (1/2) should be 0.5 . (1/2) gives 0, not 0.5. The division of integers in Python 2.x defaults to returning an integer, not a floating point number (for a variety of reasons that you can Google for if you want an explanation). -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Wed Mar 27 08:12:48 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 27 Mar 2013 08:12:48 -0400 Subject: [SciPy-User] Sequence not work In-Reply-To: References: Message-ID: On 3/27/13, Robert Kern wrote: > On Wed, Mar 27, 2013 at 3:11 AM, El suisse wrote: > >> Hi >> I have to represent the following sequence: >> >> [image: z[2]=2] >> [image: z[n+1]=2^{n-1/2}\sqrt{1-\sqrt{1-4^{1-n}z[n]^{2}}] [image: >> n=2,3,4....] >> >> >> and my code is as follows: >> >> >> #!/usr/bin/env python >> >> >> import matplotlib.pyplot as plt >> >> import numpy as np >> >> >> num_muestras = 100 >> >> z = np.zeros(num_muestras) >> >> >> >> z[2] = 2.0 #En Python se cuenta desde el cero >> >> for n in range(2,num_muestras-1): >> >> >> z[n+1] = np.power(2,n-(1/2)) * (np.sqrt(1- np.sqrt(1-np.power(4,1-n) >> * >> np.power(z[n],2))) ) >> > > (1/2) should be 0.5 . (1/2) gives 0, not 0.5. The division of integers in > Python 2.x defaults to returning an integer, not a floating point number > (for a variety of reasons that you can Google for if you want an > explanation). 
Also, in numpy, an integer to a negative power gives 0:

In [21]: np.power(4, -3)
Out[21]: 0

so change np.power(4, 1-n) to np.power(4.0, 1-n), or even 4**(1-n),
since there is really no need to use a numpy function there.

And then only run your series out to about num_muestras = 30; you'll
see that it converges to pi, but then blows up.

Warren

>
> --
> Robert Kern
>

From mailinglists at xgm.de  Wed Mar 27 12:45:06 2013
From: mailinglists at xgm.de (Florian Lindner)
Date: Wed, 27 Mar 2013 17:45:06 +0100
Subject: [SciPy-User] Best way to feed data to Gnuplot?
Message-ID: <7072300.23VAXm2rWG@horus>

Hello,

I want to utilise gnuplot for my plots. I know about matplotlib, but I don't
like it (interface, appearance).

What is the best way to feed data from a numpy array to gnuplot? At the moment
I try to use it like gnuplot -e "plot '-' ...." and write the data to stdin:

1,2,3
2,6,5
e
10,20,30

Is there a better way? What is the best way to print the numpy data in the
correct format?

There is a Gnuplot.py package, however that seems very outdated, last version
is from 2008. According to the docs it still depends on Numeric.

Thanks!

Florian

From afraser at lanl.gov  Wed Mar 27 13:26:01 2013
From: afraser at lanl.gov (Andrew Fraser)
Date: Wed, 27 Mar 2013 11:26:01 -0600
Subject: [SciPy-User] Best way to feed data to Gnuplot?
In-Reply-To: <7072300.23VAXm2rWG@horus>
References: <7072300.23VAXm2rWG@horus>
Message-ID: <51532BA9.7030903@lanl.gov>

I switched from gnuplot to matplotlib in 2008. The new code is a little
easier to read but the plots from gnuplot looked better.

To get access to all of the features of gnuplot, I fed custom commands to
it through a pipe. I figured out how to do it by looking at the source for
the Gnuplot.py package that you mention. Since I wanted many features not
provided by the package, I just used the pipe rather than Gnuplot.py. I
wrote data to files and used the pipe to issue commands to a gnuplot
process.
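A sketch of this pipe approach in present-day Python 3. The helper name, the `png` terminal, and the output filename are illustrative assumptions; the data is carried inline via gnuplot's `plot '-'` syntax and terminated with `e`, and the process is only launched if a `gnuplot` binary is actually on the PATH:

```python
import shutil
import subprocess
import numpy as np

def gnuplot_script(x, y, outfile="plot.png"):
    # Build a gnuplot command stream with the data inlined after "plot '-'".
    lines = ["set terminal png",
             "set output '%s'" % outfile,
             "plot '-' using 1:2 with lines title 'data'"]
    lines += ["%g %g" % pair for pair in zip(x, y)]
    lines.append("e")  # 'e' terminates the inline data block
    return "\n".join(lines) + "\n"

x = np.linspace(0.0, 2.0 * np.pi, 50)
script = gnuplot_script(x, np.sin(x))

if shutil.which("gnuplot") is not None:
    # Feed the commands to gnuplot through its stdin, as described above.
    subprocess.run(["gnuplot"], input=script.encode())
```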
On 03/27/2013 10:45 AM, Florian Lindner wrote: > Hello, > > I want to utilise gnuplot for my plots. I know about matplotlib, but I don't > like it (interface, appearance). > > What is the best way to feed data from a numpy array to gnuplot? At the moment > I try to use it like gnuplot -e "plot '-' ...." and write the data to stdin: > > 1,2,3 > 2,6,5 > e > 10,20,30 > > Is there a better way? What is the best to print the numpy data in the correct > format? > > There is a Gnuplot.py package, however that seems very outdated, last version > is from 2008. According to the docs it still depends on Numeric. > > > Thanks! > > Florian > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From beamesleach at gmail.com Wed Mar 27 13:39:43 2013 From: beamesleach at gmail.com (Alex Leach) Date: Wed, 27 Mar 2013 17:39:43 -0000 Subject: [SciPy-User] Best way to feed data to Gnuplot? In-Reply-To: <7072300.23VAXm2rWG@horus> References: <7072300.23VAXm2rWG@horus> Message-ID: On Wed, 27 Mar 2013 17:26:01 -0000, Andrew Fraser wrote: > I switched from gnuplot to matplotlib in 2008. The new code is a little > easier to read but the plots from gnuplot looked better. You beat me to it! +1 for matplotlib. I only used gnuplot for about 3 months, also in 2008, before switching to matplotlib. True, the graphs don't look as nice, but my code does; I haven't looked back... On Wed, 27 Mar 2013 16:45:06 -0000, Florian Lindner wrote: > Is there a better way? What is the best to print the numpy data in the > correct > format? Have you considered using matplotlib[1]? It interfaces with numpy arrays really well. In days long gone, I wrote formatted strings to gnuplot-compatible text files, and then called gnuplot with Python's os.system. It wasn't a very elegant solution at all.. 
matplotlib is at first overwhelmingly large, as it provides both an Object Oriented API[2], and also tries to emulate a very similar interface to matlab (aka matplotlib.pyplot[3]). When used in conjunction with numpy, scipy (and ipython), it goes by the name pylab[4]. Cheers, Alex [1] http://matplotlib.org/ [2] http://matplotlib.org/1.2.1/api/index.html [3] http://matplotlib.org/1.2.1/api/pyplot_summary.html [4] http://www.scipy.org/PyLab From pav at iki.fi Wed Mar 27 15:50:29 2013 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 27 Mar 2013 21:50:29 +0200 Subject: [SciPy-User] Best way to feed data to Gnuplot? In-Reply-To: <51532BA9.7030903@lanl.gov> References: <7072300.23VAXm2rWG@horus> <51532BA9.7030903@lanl.gov> Message-ID: 27.03.2013 19:26, Andrew Fraser kirjoitti: > I switched from gnuplot to matplotlib in 2008. The new code is a little > easier to read but the plots from gnuplot looked better. Interestingly, I've had the opposite experience --- matplotlib plots look as good or better than gnuplot ones, fully publication quality. Here's my matplotlibrc (you'll probably need to grab the stix math fonts, though): # -- Output settings path.simplify: True savefig.dpi: 300 ps.papersize: a4 ps.usedistiller: xpdf # -- Font settings text.usetex: false font.size: 10.0 font.family: serif font.serif: STIXGeneral font.sans-serif: Arial font.cursive: Zapf Chancery font.monospace: Courier New mathtext.fontset: stix axes.labelsize: medium axes.titlesize: medium legend.fontsize: medium xtick.labelsize: medium ytick.labelsize: medium # -- Figure size default settings (interactive use) figure.figsize: 6, 4 figure.subplot.left : 0.2 figure.subplot.bottom: 0.2 figure.subplot.right : 0.95 figure.subplot.top: 0.9 -- Pauli Virtanen From vanforeest at gmail.com Wed Mar 27 15:55:38 2013 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 27 Mar 2013 20:55:38 +0100 Subject: [SciPy-User] Best way to feed data to Gnuplot? 
In-Reply-To: References: <7072300.23VAXm2rWG@horus> <51532BA9.7030903@lanl.gov> Message-ID: Hi, I use matplotlib while testing, but use gnuplot for making the final plots, as I find gnuplot easier to configure. The book 'gnuplot in action' by janert is very nice to help you tune your graphs the way you want, and what data format is ideal for gnuplot. I don't use gnuplot.py anymore, because matplotlib is way easier to use. Nicky On 27 March 2013 20:50, Pauli Virtanen wrote: > 27.03.2013 19:26, Andrew Fraser kirjoitti: > > I switched from gnuplot to matplotlib in 2008. The new code is a little > > easier to read but the plots from gnuplot looked better. > > Interestingly, I've had the opposite experience --- matplotlib plots > look as good or better than gnuplot ones, fully publication quality. > > Here's my matplotlibrc (you'll probably need to grab the stix math > fonts, though): > > # -- Output settings > path.simplify: True > savefig.dpi: 300 > ps.papersize: a4 > ps.usedistiller: xpdf > > # -- Font settings > text.usetex: false > > font.size: 10.0 > font.family: serif > font.serif: STIXGeneral > font.sans-serif: Arial > font.cursive: Zapf Chancery > font.monospace: Courier New > > mathtext.fontset: stix > > axes.labelsize: medium > axes.titlesize: medium > legend.fontsize: medium > xtick.labelsize: medium > ytick.labelsize: medium > > > # -- Figure size default settings (interactive use) > figure.figsize: 6, 4 > figure.subplot.left : 0.2 > figure.subplot.bottom: 0.2 > figure.subplot.right : 0.95 > figure.subplot.top: 0.9 > > > -- > Pauli Virtanen > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nils106 at googlemail.com Wed Mar 27 15:23:31 2013 From: nils106 at googlemail.com (Nils Wagner) Date: Wed, 27 Mar 2013 19:23:31 +0000 Subject: [SciPy-User] Strange behaviour of scipy.test() Message-ID: Hi all, I am confused by the current behavior of scipy.test() The number of tests is 0. Any idea ? Nils nwagner at linux-itfv:~> python Python 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test() Running unit tests for scipy NumPy version 1.8.0.dev-1a816c7 NumPy is installed in /home/nwagner/local/lib64/python2.7/site-packages/numpy SciPy version 0.13.0.dev-9ba2fda SciPy is installed in /home/nwagner/local/lib64/python2.7/site-packages/scipy-0.13.0.dev_9ba2fda-py2.7-linux-x86_64.egg/scipy Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] nose version 1.1.2 ---------------------------------------------------------------------- Ran 0 tests in 0.055s OK -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Thu Mar 28 10:52:44 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 28 Mar 2013 10:52:44 -0400 Subject: [SciPy-User] Strange behaviour of scipy.test() In-Reply-To: References: Message-ID: On 3/27/13, Nils Wagner wrote: > Hi all, > > I am confused by the current behavior of scipy.test() > The number of tests is 0. Any idea ? > > Nils > > nwagner at linux-itfv:~> python > Python 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC] on linux2 > Type "help", "copyright", "credits" or "license" for more information. 
>>>> import scipy
>>>> scipy.test()
> Running unit tests for scipy
> NumPy version 1.8.0.dev-1a816c7
> NumPy is installed in
> /home/nwagner/local/lib64/python2.7/site-packages/numpy
> SciPy version 0.13.0.dev-9ba2fda
> SciPy is installed in
> /home/nwagner/local/lib64/python2.7/site-packages/scipy-0.13.0.dev_9ba2fda-py2.7-linux-x86_64.egg/scipy
> Python version 2.7.2 (default, Aug 19 2011, 20:41:43) [GCC]
> nose version 1.1.2
>
> ----------------------------------------------------------------------
> Ran 0 tests in 0.055s
>
> OK
>

Tests? We don't need no stinkin' tests!

Well, actually, a recently merged pull request introduced the problem.
See the discussion here: https://github.com/scipy/scipy/pull/454

Warren

From mutantturkey at gmail.com  Thu Mar 28 15:46:39 2013
From: mutantturkey at gmail.com (Calvin Morrison)
Date: Thu, 28 Mar 2013 15:46:39 -0400
Subject: [SciPy-User] Sparse Matricies and NNLS
Message-ID:

Hi!

I have a very large matrix that I am using with the
scipy.optimize.nnls function, however the matrix is so large that it
takes 30Gb of memory to load with python!

I was thinking about using a sparse matrix since it is relatively
sparse, but the problem is that even if I were to use the sparse
matrices, the nnls function only accepts an ndarray, not a sparse
matrix. (When I try to throw a sparse matrix at it I get an error.)

So of course I'd need to convert it to a dense array before passing it
into nnls, but that would totally void the whole point of a sparse
array, because then python would still have to allocate the full dense
matrix.

Does anyone have an idea about how to efficiently pass a matrix and
store it without blowing up the memory usage?

Thank you,

Calvin

From mwmorrison93 at gmail.com  Thu Mar 28 16:05:35 2013
From: mwmorrison93 at gmail.com (Michael Morrison)
Date: Thu, 28 Mar 2013 13:05:35 -0700
Subject: [SciPy-User] Sparse Matricies and NNLS
In-Reply-To:
References:
Message-ID:

Hey Calvin,

I was just looking into this same issue recently.
Here's the solution someone recommended on stackoverflow: http://stackoverflow.com/questions/1053928/python-numpy-very-large-matrices I haven't actually used it yet, but it seems Pytables is the way to go. Good Luck, Mike On Thu, Mar 28, 2013 at 12:46 PM, Calvin Morrison wrote: > Hi! > > I have a very large matrix that I am using with the > scipy.optimize.nnls function, however the matrix is so large that it > takes 30Gb of memory to load with python! > > I was thinking about using a sparse matrix since it is relatively > sparse, but the problem is that even if I was to use the sparse > matricies, the nnls function only accepts a ndarray, not a sparse > matrix. (when I try and throw a sparse matrix at it I get an error.) > > So of course I'd need to convert it to a dense array before passing it > into nnls, but that would totally void the whole point of a sparse > array, because then python would still have to allocate the full dense > matrix. > > Does anyone have an idea about how to efficiently pass a matrix and > store it without blowing up the memory usage? > > Thank you, > > Calvin > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Mar 28 16:48:26 2013 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 28 Mar 2013 22:48:26 +0200 Subject: [SciPy-User] Sparse Matricies and NNLS In-Reply-To: References: Message-ID: 28.03.2013 21:46, Calvin Morrison kirjoitti: [clip] > I was thinking about using a sparse matrix since it is relatively > sparse, but the problem is that even if I was to use the sparse > matricies, the nnls function only accepts a ndarray, not a sparse > matrix. (when I try and throw a sparse matrix at it I get an error.) The nnls algorithm in Scipy relies on dense matrix algebra, and is moreover written in Fortran. 
So, there is no way to tell it to use
sparse matrices.

You'll need to find an implementation of the NNLS algorithm that either is
matrix-free or works for sparse problems. If you find such code, be sure
to reply to this list --- it might be useful to include it in Scipy,
provided the license is compatible.

--
Pauli Virtanen

From mutantturkey at gmail.com  Thu Mar 28 17:33:07 2013
From: mutantturkey at gmail.com (Calvin Morrison)
Date: Thu, 28 Mar 2013 17:33:07 -0400
Subject: [SciPy-User] Sparse Matricies and NNLS
In-Reply-To:
References:
Message-ID:

Pauli,

It seems nobody wants to touch the nnls algorithm because the only
implementation that is floating around is the one from the original
publication or automatic conversions of it.

Calvin

On Mar 28, 2013 4:48 PM, "Pauli Virtanen" wrote:
> 28.03.2013 21:46, Calvin Morrison kirjoitti:
> [clip]
> > I was thinking about using a sparse matrix since it is relatively
> > sparse, but the problem is that even if I was to use the sparse
> > matricies, the nnls function only accepts a ndarray, not a sparse
> > matrix. (when I try and throw a sparse matrix at it I get an error.)
>
> The nnls algorithm in Scipy relies on dense matrix algebra, and is
> moreover written in Fortran. So, there is no way to tell it to use
> sparse matrices.
>
> You'll need to find an implementation of NNLS algorithm that either is
> matrix-free or works for sparse problems. If you find such code, be sure
> to reply to this list --- it might be useful to include it in Scipy,
> provided the license is compatible.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
From josef.pktd at gmail.com  Thu Mar 28 22:58:14 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 Mar 2013 22:58:14 -0400
Subject: [SciPy-User] special's hidden treasures: rootfinding for distribution cdfs
Message-ID:

The fortran code for the distribution functions also has rootfinding built in.
We can solve for any value given the other values.

For example, solve for the non-centrality parameter for the
non-central t-distribution.
I never thought I would ever need that.

Is this the right function for special.nctdtrinc and friends? I didn't
manage to follow the trail.

\scipy\special\cdflib\cdftnc.f

C      SUBROUTINE CDFTNC( WHICH, P, Q, T, DF, PNONC, STATUS, BOUND )
C                 Cumulative Distribution Function
C                    Non-Central T distribution
C
C                              Function
C
C      Calculates any one parameter of the noncentral t distribution give
C      values for the others.
C
C                              Arguments
C
C      WHICH --> Integer indicating which argument
C                values is to be calculated from the others.
C                Legal range: 1..3
C        iwhich = 1 : Calculate P and Q from T,DF,PNONC
C        iwhich = 2 : Calculate T from P,Q,DF,PNONC
C        iwhich = 3 : Calculate DF from P,Q,T
C        iwhich = 4 : Calculate PNONC from P,Q,DF,T

I need iwhich=4

solving for the non-centrality parameter

>>> t = stats.nct._ppf(0.1, 9, 10)
>>> special.nctdtrinc(9, 0.1, t)
10.000000055648481

confidence interval for non-centrality parameter (AFAIU)

>>> special.nctdtrinc(24, [0.975, 0.025], 25)
array([ 17.68259,  32.26892])

upper value differs slightly from an R package (which has its own
implementation)

cross check

>>> nc = special.nctdtrinc(24, 0.025, 25)
>>> stats.nct.cdf(25, 24, nc)
0.024999999999859579

a paper refers to a SAS function which seems to do the same
'''
following SAS syntax:
LowNC_CV = TNONCT(tobs., ?, 1 - ?
/ 2),
'''

Background: If I understand this part of some papers correctly, then I can get
confidence intervals for effect sizes for statistical tests that are
based on the t-distributions, with non-central t-distribution under the
alternative, with essentially just a call to scipy.special.

------------------
It's nice to find the answer in scipy special when we need to look for
something we never knew existed.

**Scipy special needs your help.**
==========================

\scipy\special\add_newdocs.py  (maybe my checkout is a bit outdated)

<...>
add_newdoc("scipy.special", "ncfdtridfd",
    """
    """)

add_newdoc("scipy.special", "ncfdtridfn",
    """
    """)

add_newdoc("scipy.special", "ncfdtrinc",
    """
    """)

add_newdoc("scipy.special", "nctdtr",
    """
    """)

add_newdoc("scipy.special", "nctdtridf",
    """
    """)

add_newdoc("scipy.special", "nctdtrinc",
    """
    """)

add_newdoc("scipy.special", "nctdtrit",
    """
    """)
<...>

The information is available in the fortran docstrings. A pull request
that fills in some blanks is an easy way to become a scipy
contributor.

Josef

From mailinglists at xgm.de  Fri Mar 29 10:13:31 2013
From: mailinglists at xgm.de (Florian Lindner)
Date: Fri, 29 Mar 2013 15:13:31 +0100
Subject: [SciPy-User] Best way to feed data to Gnuplot?
In-Reply-To: <7072300.23VAXm2rWG@horus>
References: <7072300.23VAXm2rWG@horus>
Message-ID: <29040443.uxYWd0ZOeR@horus>

Am Mittwoch, 27. März 2013, 17:45:06 schrieb Florian Lindner:
> Hello,
>
> I want to utilise gnuplot for my plots. I know about matplotlib, but I don't
> like it (interface, appearance).

Thanks for your replies! I think I'll reevaluate matplotlib and give it
another try...

Florian

From mailinglists at xgm.de  Fri Mar 29 10:27:56 2013
From: mailinglists at xgm.de (Florian Lindner)
Date: Fri, 29 Mar 2013 15:27:56 +0100
Subject: [SciPy-User] Find and interpolate zero value
Message-ID: <6045024.F2qzHeQDIF@horus>

Hello,

I have an array like:

array([[-1. ,  2. ],
       [-0.5,  4. ],
       [ 0.5,  6.
]])

The first column holds coordinates, the second column data.

Is there a scipy/numpy way to find zero in the coordinates and interpolate a
data value from the second column? Here the result would be 5
( 4 + (6-4)/( 0.5 - (-0.5)) * 0.5 ).

How would this best be done with scipy/numpy?

Thanks,

Florian

From pav at iki.fi  Fri Mar 29 12:42:45 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 29 Mar 2013 18:42:45 +0200
Subject: [SciPy-User] Find and interpolate zero value
In-Reply-To: <6045024.F2qzHeQDIF@horus>
References: <6045024.F2qzHeQDIF@horus>
Message-ID:

29.03.2013 16:27, Florian Lindner kirjoitti:
> Hello,
>
> I have an array like:
>
> array([[-1. ,  2. ],
>        [-0.5,  4. ],
>        [ 0.5,  6. ]])
>
> The first column holds coordinates, the second column data.
>
> Is there a scipy/numpy way to find zero in the coordinates and interpolate a
> data value from the second column?

These may be helpful:

>>> numpy.lookfor('interpolate')
>>> numpy.lookfor('interpolate 1D', module='scipy')

http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html

To find zeros, interpolate y-vs-x instead of x-vs-y

--
Pauli Virtanen

From pav at iki.fi  Fri Mar 29 12:50:16 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 29 Mar 2013 18:50:16 +0200
Subject: [SciPy-User] Find and interpolate zero value
In-Reply-To:
References: <6045024.F2qzHeQDIF@horus>
Message-ID:

29.03.2013 18:42, Pauli Virtanen kirjoitti:
[clip]
> To find zeros, interpolate y-vs-x instead of x-vs-y

If there are multiple roots, you can use splines:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html

See the roots() method.
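Concretely, for the array from the original post, `np.interp` gives the expected 5 when interpolating value-vs-coordinate, and swapping the two columns (y-vs-x, as suggested) finds the coordinate at which the data takes a given level. A small sketch:

```python
import numpy as np

a = np.array([[-1.0, 2.0],
              [-0.5, 4.0],
              [0.5, 6.0]])

coords, values = a[:, 0], a[:, 1]

# np.interp requires the x-coordinates to be increasing, which they are here.
y_at_zero = np.interp(0.0, coords, values)

# To locate where the *data* reaches some level instead, interpolate
# y-vs-x, i.e. swap the roles of the two columns.
x_at_value5 = np.interp(5.0, values, coords)
```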
-- Pauli Virtanen From ralf.gommers at gmail.com Fri Mar 29 16:34:49 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 29 Mar 2013 21:34:49 +0100 Subject: [SciPy-User] special's hidden treasures: rootfinding for distribution cdfs In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 3:58 AM, wrote: > The fortran code for the distribution functions also has rootfinding build > in. > We can solve for any value given the other values. > > For example, solve for the non-centrality parameter for the > non-central t-distribution. > I never thought I would ever need that. > > Is this the right function for special.nctdtrinc and friends? I didn't > manage to follow the trail. > > \scipy\special\cdflib\cdftnc.f > > C SUBROUTINE CDFTNC( WHICH, P, Q, T, DF, PNONC, STATUS, BOUND ) > C Cumulative Distribution Function > C Non-Central T distribution > C > C Function > C > C Calculates any one parameter of the noncentral t distribution give > C values for the others. > C > C Arguments > C > C WHICH --> Integer indicating which argument > C values is to be calculated from the others. > C Legal range: 1..3 > C iwhich = 1 : Calculate P and Q from T,DF,PNONC > C iwhich = 2 : Calculate T from P,Q,DF,PNONC > C iwhich = 3 : Calculate DF from P,Q,T > C iwhich = 4 : Calculate PNONC from P,Q,DF,T > > > I need iwhich=4 > > solving for the non-centrality parameter > >>> t = stats.nct._ppf(0.1, 9, 10) > >>> special.nctdtrinc(9, 0.1, t) > 10.000000055648481 > > confidence interval for non-centrality parameter (AFAIU) > > >>> special.nctdtrinc(24, [0.975, 0.025], 25) > array([ 17.68259, 32.26892]) > > upper value differs slightly from an R package (which has it's own > implementation) > > cross check > >>> nc = special.nctdtrinc(24, 0.025, 25) > >>> stats.nct.cdf(25, 24, nc) > 0.024999999999859579 > > a paper refers to a SAS function which seems to do the same > '''' > following SAS syntax: > LowNC_CV = TNONCT(tobs., ?, 1 - ? 
/ 2), > ''' > > Background: If I understand this part of some papers correctly, then I can > get > confidence intervals for effect sizes for statistical tests that are > based on the > t-distributions with non-central t-distribution under the alternative, with > essentially just a call to scipy.special. > > ------------------ > It's nice to find the answer in scipy special when we need to look for > something we never knew that it existed. > > > > **Scipy special needs your help.** > ========================== > > \scipy\special\add_newdocs.py (maybe my checkout is a bit outdated) > > <...> > add_newdoc("scipy.special", "ncfdtridfd", > """ > """) > > add_newdoc("scipy.special", "ncfdtridfn", > """ > """) > > add_newdoc("scipy.special", "ncfdtrinc", > """ > """) > > add_newdoc("scipy.special", "nctdtr", > """ > """) > > add_newdoc("scipy.special", "nctdtridf", > """ > """) > > add_newdoc("scipy.special", "nctdtrinc", > """ > """) > > add_newdoc("scipy.special", "nctdtrit", > """ > """) > <...> > > The information is available in the fortran docstrings. A pull request > that fills in some blanks is an easy way to become a scipy > contributer. > Are you sure that works? Last time I checked we couldn't add docstrings to f2py-wrapped functions with add_newdoc. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Mar 29 16:49:12 2013 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 29 Mar 2013 22:49:12 +0200 Subject: [SciPy-User] special's hidden treasures: rootfinding for distribution cdfs In-Reply-To: References: Message-ID: 29.03.2013 22:34, Ralf Gommers kirjoitti: [clip] > Are you sure that works? Last time I checked we couldn't add docstrings > to f2py-wrapped functions with add_newdoc. It works, it's not really add_newdoc. 
--
Pauli Virtanen

From josef.pktd at gmail.com Fri Mar 29 16:55:31 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 29 Mar 2013 16:55:31 -0400
Subject: [SciPy-User] special's hidden treasures: rootfinding for distribution cdfs
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 29, 2013 at 4:34 PM, Ralf Gommers wrote:
[clip]
> Are you sure that works? Last time I checked we couldn't add docstrings
> to f2py-wrapped functions with add_newdoc.

Now I'm not sure, and I never tried.
I was just jumping to conclusions based on superficially following
https://github.com/scipy/scipy/pull/459

Josef

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From ralf.gommers at gmail.com Sat Mar 30 08:31:36 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 30 Mar 2013 13:31:36 +0100
Subject: [SciPy-User] ANN: SciPy 0.12.0 release candidate 1
Message-ID: 

Hi,

I am pleased to announce the availability of the first release candidate of
SciPy 0.12.0. This is shaping up to be another solid release, with some cool
new features (see highlights below) and a large amount of bug fixes and
maintenance work under the hood. The number of contributors also keeps
rising steadily, we're at 74 so far for this release.

Sources and binaries can be found at
http://sourceforge.net/projects/scipy/files/scipy/0.12.0rc1/, release notes
are copied below.

Please try this release and report any problems on the mailing list. If no
issues are found, the final release will be in one week.

Cheers,
Ralf

==========================
SciPy 0.12.0 Release Notes
==========================

.. note:: Scipy 0.12.0 is not released yet!

.. contents::

SciPy 0.12.0 is the culmination of 7 months of hard work. It contains many
new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes in
this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations.
Moreover, our development attention will now shift to bug-fix releases on the 0.12.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Completed QHull wrappers in scipy.spatial. - cKDTree now a drop-in replacement for KDTree. - A new global optimizer, basinhopping. - Support for Python 2 and Python 3 from the same code base (no more 2to3). This release requires Python 2.6, 2.7 or 3.1-3.3 and NumPy 1.5.1 or greater. Support for Python 2.4 and 2.5 has been dropped as of this release. New features ============ ``scipy.spatial`` improvements ------------------------------ cKDTree feature-complete ^^^^^^^^^^^^^^^^^^^^^^^^ Cython version of KDTree, cKDTree, is now feature-complete. Most operations (construction, query, query_ball_point, query_pairs, count_neighbors and sparse_distance_matrix) are between 200 and 1000 times faster in cKDTree than in KDTree. With very minor caveats, cKDTree has exactly the same interface as KDTree, and can be used as a drop-in replacement. Voronoi diagrams and convex hulls ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `scipy.spatial` now contains functionality for computing Voronoi diagrams and convex hulls using the Qhull library. (Delaunay triangulation was available since Scipy 0.9.0.) Delaunay improvements ^^^^^^^^^^^^^^^^^^^^^ It's now possible to pass in custom Qhull options in Delaunay triangulation. Coplanar points are now also recorded, if present. Incremental construction of Delaunay triangulations is now also possible. Spectral estimators (``scipy.signal``) -------------------------------------- The functions ``scipy.signal.periodogram`` and ``scipy.signal.welch`` were added, providing DFT-based spectral estimators. ``scipy.optimize`` improvements ------------------------------- Callback functions in L-BFGS-B and TNC ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A callback mechanism was added to L-BFGS-B and TNC minimization solvers. 
Basin hopping global optimization (``scipy.optimize.basinhopping``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A new global optimization algorithm. Basinhopping is designed to efficiently
find the global minimum of a smooth function.

``scipy.special`` improvements
------------------------------

Revised complex error functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The computation of special functions related to the error function now uses
a new `Faddeeva library from MIT `__ which increases their numerical
precision. The scaled and imaginary error functions ``erfcx`` and ``erfi``
were also added, and the Dawson integral ``dawsn`` can now be evaluated for
a complex argument.

Faster orthogonal polynomials
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Evaluation of orthogonal polynomials (the ``eval_*`` routines) is now faster
in ``scipy.special``, and their ``out=`` argument functions properly.

``scipy.sparse.linalg`` features
--------------------------------

- In ``scipy.sparse.linalg.spsolve``, the ``b`` argument can now be either
  a vector or a matrix.
- ``scipy.sparse.linalg.inv`` was added. This uses ``spsolve`` to compute a
  sparse matrix inverse.
- ``scipy.sparse.linalg.expm`` was added. This computes the exponential of a
  sparse matrix using a similar algorithm to the existing dense array
  implementation in ``scipy.linalg.expm``.

Listing Matlab(R) file contents in ``scipy.io``
-----------------------------------------------

A new function ``whosmat`` is available in ``scipy.io`` for inspecting
contents of MAT files without reading them to memory.

Documented BLAS and LAPACK low-level interfaces (``scipy.linalg``)
------------------------------------------------------------------

The modules `scipy.linalg.blas` and `scipy.linalg.lapack` can be used to
access low-level BLAS and LAPACK functions.
Polynomial interpolation improvements (``scipy.interpolate``)
-------------------------------------------------------------

The barycentric, Krogh, piecewise and pchip polynomial interpolators in
``scipy.interpolate`` now accept an ``axis`` argument.

Deprecated features
===================

`scipy.lib.lapack`
------------------

The module `scipy.lib.lapack` is deprecated. You can use
`scipy.linalg.lapack` instead. The module `scipy.lib.blas` was deprecated
earlier in Scipy 0.10.0.

`fblas` and `cblas`
-------------------

Accessing the modules `scipy.linalg.fblas`, `cblas`, `flapack`, `clapack` is
deprecated. Instead, use the modules `scipy.linalg.lapack` and
`scipy.linalg.blas`.

Backwards incompatible changes
==============================

Removal of ``scipy.io.save_as_module``
--------------------------------------

The function ``scipy.io.save_as_module`` was deprecated in Scipy 0.11.0, and
is now removed. Its private support modules ``scipy.io.dumbdbm_patched`` and
``scipy.io.dumb_shelve`` are also removed.

Other changes
=============

Authors
=======

* Anton Akhmerov + * Alexander Eberspächer + * Anne Archibald * Jisk Attema + * K.-Michael Aye + * bemasc + * Sebastian Berg + * François Boulogne + * Matthew Brett * Lars Buitinck * Steven Byrnes + * Tim Cera + * Christian + * Keith Clawson + * David Cournapeau * Nathan Crock + * endolith * Bradley M. Froehle + * Matthew R Goodman * Christoph Gohlke * Ralf Gommers * Robert David Grant + * Yaroslav Halchenko * Charles Harris * Jonathan Helmus * Andreas Hilboll * Hugo + * Oleksandr Huziy * Jeroen Demeyer + * Johannes Schönberger + * Steven G.
Johnson + * Chris Jordan-Squire * Jonathan Taylor + * Niklas Kroeger + * Jerome Kieffer + * kingson + * Josh Lawrence * Denis Laxalde * Alex Leach + * Lorenzo Luengo + * Stephen McQuay + * MinRK * Sturla Molden + * Eric Moore + * mszep + * Matt Newville + * Vlad Niculae * Travis Oliphant * David Parker + * Fabian Pedregosa * Josef Perktold * Zach Ploskey + * Alex Reinhart + * Richard Lindsley + * Gilles Rochefort + * Ciro Duran Santillli + * Jan Schlueter + * Jonathan Scholz + * Anthony Scopatz * Skipper Seabold * Fabrice Silva + * Scott Sinclair * Jacob Stevenson + * Sturla Molden + * Julian Taylor + * thorstenkranz + * John Travers + * True Price + * Nicky van Foreest * Jacob Vanderplas * Patrick Varilly * Daniel Velkov + * Pauli Virtanen * Stefan van der Walt * Warren Weckesser A total of 74 people contributed to this release. People with a "+" by their names contributed a patch for the first time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Mar 31 07:08:03 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 31 Mar 2013 13:08:03 +0200 Subject: [SciPy-User] a routine for fitting a straight line to data In-Reply-To: <41FE3C6C-5337-4B4C-A742-812B9A9340C3@gmail.com> References: <41FE3C6C-5337-4B4C-A742-812B9A9340C3@gmail.com> Message-ID: On Mon, Mar 25, 2013 at 2:27 AM, David Pine wrote: > scipy.stats.linregress does not do the job. First, it does not allow for > weighting of the data, relative, absolute, or otherwise. Weighting could be easily added to linregress, that would be useful. > It does not return the covariance matrix, which provides estimates of the > uncertainties in the fitting parameters, nor does it return chi-squared, > which is the standard measure of the quality of the fit in the physical > sciences. These are a little trickier to add, would require adding a full_output=False input because just adding a return value would break backwards compatibility. 
But it's also possible. Would you like to have a go at adding your
suggestions to linregress?

There's still the issue that if you enhance stats.linregress it will be hard
to find. Fitting tools are scattered all over, and it's not clear what the
best solution is. A doc section giving an overview of all fit functionality
would be a good start. See https://github.com/scipy/scipy/pull/448 for a
recent discussion on that topic.

Ralf

> David
>
> On Mar 24, 2013, at 8:19 PM, josef.pktd at gmail.com wrote:
>
> > On Sun, Mar 24, 2013 at 7:53 PM, David Pine wrote:
> >> I would like to submit a routine to scipy for performing least squares
> >> fitting of a straight line f(x) = ax + b to an (x,y) data set. There
> >> are a number of ways of doing this currently using scipy or numpy but
> >> all have serious drawbacks. Here is what is currently available, as far
> >> as I can tell, and what seem to me to be their drawbacks.
> >>
> >> 1. numpy.polyfit :
> >> a. It is slower than it needs to be. polyfit uses matrix methods
> >> that are needed to find best fits to general polynomials (quadratic,
> >> cubic, quartic, and higher orders), but matrix methods are overkill
> >> when you just want to fit a straight line f(x) = ax + b to a data set.
> >> A direct approach can yield fits significantly faster.
> >> b. polyfit currently does not allow using absolute error estimates
> >> for weighting the data; only relative error estimates are currently
> >> possible. This can be fixed, but for the moment it's a problem.
> >> c. New or inexperienced users are unlikely to look for a routine to
> >> fit straight lines in a routine that is advertised as being for
> >> polynomials. This is a more important point than it may seem. Fitting
> >> data to a straight line is probably the most common curve fitting task
> >> performed, and the only one that many users will ever use. It makes
> >> sense to cater to such users by providing them with a routine that does
> >> what they want in as clear and straightforward a manner as possible.
I am a physics > professor and have seen the confusion first hand with a wide spectrum of > students who are new to Python. It should not be this hard for them. > >> > >> 2. scipy.linalg.lstsq > >> a. Using linalg.lstsq to fit a straight line is clunky and very > slow (on the order of 10 times slower than polyfit, which is already slower > than it needs to be). > >> b. While linalg.lstsq can be used to fit data with error estimates > (i.e. using weighting), how to do this is far from obvious. It's unlikely > that anyone but an expert would figure out how to do it. > >> c. linalg.lstsq requires the use of matrices, which will be > unfamiliar to some users. Moreover, it should not be necessary to use > matrices when the task at hand only involves one-dimensional arrays. > >> > >> 3. scipy.curve_fit > >> a. This is a nonlinear fitting routine. As such, it searches for > the global minimum in the objective function (chi-squared) rather than just > calculating where the global minimum is using the analytical expressions > for the best fits. It's the wrong method for the problem, although it will > work. > >> > >> Questions: What do others in the scientific Python community think > about the need for such a routine? Where should routine to fit data to a > straight line go? It would seem to me that it should go in the > scipy.optimize package, but I wonder what others think. 
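For what it's worth, the closed-form weighted fit described in point 1a above fits in a few lines (the standard chi-squared minimization, as found in e.g. Numerical Recipes; the name linfit and its interface are purely illustrative, not an existing or proposed scipy function):

```python
import numpy as np

def linfit(x, y, sigma=None):
    """Weighted least-squares fit of y = a*x + b.

    sigma holds absolute 1-sigma errors on y (defaults to 1, i.e. an
    unweighted fit). Returns slope a, intercept b, their 2x2 covariance
    matrix, and chi-squared.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    w = np.ones_like(x) if sigma is None else 1.0 / np.asarray(sigma, float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (S * Sxy - Sx * Sy) / delta    # slope
    b = (Sxx * Sy - Sx * Sxy) / delta  # intercept
    cov = np.array([[S, -Sx], [-Sx, Sxx]]) / delta  # var(a), var(b) on diagonal
    chi2 = (w * (y - a * x - b) ** 2).sum()
    return a, b, cov, chi2

# Exact line y = 2x + 1 should be recovered with chi2 ~ 0:
a, b, cov, chi2 = linfit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 2.0 1.0
```

With absolute sigmas supplied, sqrt of the covariance diagonal gives the parameter uncertainties directly, and chi2 can be compared against the number of degrees of freedom as a goodness-of-fit check.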
> > > > scipy.stats.linregress if there is only one x > > > > Josef > > > > > >> David Pine > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Sun Mar 31 07:59:21 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 31 Mar 2013 07:59:21 -0400 Subject: [SciPy-User] assignment optimization problem Message-ID: Are there python tools for addressing problems like assignment? At this point, I don't fully understand my problem, but I believe it is a mixture of discrete assignment together with some continuous variables. My son suggests coding it by hand using some kind of simple hill climbing, but maybe I could leverage existing code for this? From robert.kern at gmail.com Sun Mar 31 08:51:11 2013 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 31 Mar 2013 13:51:11 +0100 Subject: [SciPy-User] assignment optimization problem In-Reply-To: References: Message-ID: On Sun, Mar 31, 2013 at 12:59 PM, Neal Becker wrote: > Are there python tools for addressing problems like assignment? At this point, > I don't fully understand my problem, but I believe it is a mixture of discrete > assignment together with some continuous variables. My son suggests coding it > by hand using some kind of simple hill climbing, but maybe I could leverage > existing code for this? There are some tools for simple linear assignment. 
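(As an aside: for tiny cost matrices, a brute-force reference solver is handy for cross-checking whichever package ends up being used — an O(n!) sketch, only suitable as a sanity check:)

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Minimum-cost assignment of rows to columns by exhaustive search.

    cost is a square list-of-lists; returns (column per row, total cost).
    Only feasible for very small n, but useful as ground truth when
    validating a real linear assignment solver.
    """
    n = len(cost)
    best_cols, best_cost = None, float("inf")
    for cols in permutations(range(n)):
        c = sum(cost[i][j] for i, j in enumerate(cols))
        if c < best_cost:
            best_cols, best_cost = cols, c
    return best_cols, best_cost

cols, total = brute_force_assignment([[4, 1, 3],
                                      [2, 0, 5],
                                      [3, 2, 2]])
print(cols, total)  # (1, 0, 2) 5
```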
https://pypi.python.org/pypi/munkres/
https://pypi.python.org/pypi/hungarian/
https://pypi.python.org/pypi/pyLAPJV/

None of them will help if you need to do continuous optimization as well.
You may be able to get a satisficing answer by alternating linear assignment
and continuous optimization, but I'm pretty sure there are no algorithmic
guarantees with that approach.

You may be able to cast your problem as a mixed integer programming problem.
Check out the tools provided by Coopr and COIN-OR:

https://software.sandia.gov/trac/coopr
http://www.coin-or.org/

--
Robert Kern

From e.antero.tammi at gmail.com Sun Mar 31 09:08:08 2013
From: e.antero.tammi at gmail.com (eat)
Date: Sun, 31 Mar 2013 16:08:08 +0300
Subject: [SciPy-User] assignment optimization problem
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Sun, Mar 31, 2013 at 2:59 PM, Neal Becker wrote:
> Are there python tools for addressing problems like assignment? At this
> point, I don't fully understand my problem, but I believe it is a mixture
> of discrete assignment together with some continuous variables. My son
> suggests coding it by hand using some kind of simple hill climbing, but
> maybe I could leverage existing code for this?

FWIW, I have implemented a pure python assignment solver a few years back,
based on the quite comprehensive book "Assignment Problems" by Rainer
Burkard, Mauro Dell'Amico, and Silvano Martello (ISBN: 978-0-898716-63-4),
http://www.assignmentproblems.com/

I can mail the code (less than 100 lines) to you, but the code is slightly
awkward, since it follows very closely the algorithm described in the book
(so you might need to have access to it).

BTW, what is the typical size of the cost matrix you are aiming to handle?

Regards,
-eat

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From freebluewater at gmail.com Sat Mar 30 18:26:32 2013
From: freebluewater at gmail.com (freeblue)
Date: Sat, 30 Mar 2013 15:26:32 -0700 (PDT)
Subject: [SciPy-User] How to use Scipy to solve a Two-Point Boundary Value problem, of a nth system Nonlinear Second-Order Differential Equation
Message-ID: 

Hello to everyone here,

I am trying to find the root of the following equation using Newton's method:

http://www.wolframalpha.com/input/?i=d%2Fdx%28du%2Fdx%29+%3D+%28-3%2F%28k1%29[x]%29*%28k4[x]-%28k2[x]%2Bk3[x]%29*u ^ %281%2F3%29%29+*%28%28du%2Fdx%29^%282%2F3%29%29

which has 2 boundary conditions (u(x=0) = 0, u(x=n(max)) = m (constant)) ...
k1, k2, k3, k4 are arrays, calculated on each x gridpoint.

Please, does anyone know if it is possible to solve this nonlinear
second-order differential equation (a nonlinear 1-D ODE BVP) using
scipy.integrate.odeint? I tried to use central differencing to "simplify"
it, but the power (2/3) makes it hard to apply Newton's method afterwards.
Could it then be possible to use scipy? Do you know of, or have seen
somewhere, any related example?

I have been trying to find and write a solution for this for the last 3
weeks, so any help will be more than welcome!!!!!

thank you in advance,
Kas
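One standard way to attack this kind of two-point BVP with odeint is the shooting method: treat the unknown initial slope u'(0) as a free parameter s, integrate the ODE as an initial value problem, and use a scalar root finder to adjust s until u(x_max) = m. The sketch below is illustrative only: it uses a simple stand-in right-hand side u'' = -u with u(0) = 0, u(1) = 1, since the real right-hand side depends on the problem-specific k1..k4 arrays, and the brentq bracket [0, 5] is an assumption that has to be adapted to the actual problem.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import brentq

# Shooting method for u'' = f(x, u, u'), u(0) = 0, u(L) = m.
# Stand-in right-hand side; replace with the real one built from k1..k4
# (e.g. interpolating the gridded arrays with numpy.interp inside f).
def f(x, u, up):
    return -u

L, m = 1.0, 1.0
x = np.linspace(0.0, L, 201)

def rhs(y, x):
    u, up = y                      # y = [u, u']
    return [up, f(x, u, up)]

def endpoint_miss(s):
    # Integrate from x=0 with initial slope s; how far off is u(L) from m?
    sol = odeint(rhs, [0.0, s], x)
    return sol[-1, 0] - m

# Find the initial slope that satisfies the far boundary condition.
s = brentq(endpoint_miss, 0.0, 5.0)   # bracket is problem-specific
u = odeint(rhs, [0.0, s], x)[:, 0]    # the BVP solution on the grid
print(round(s, 4))  # 1.1884, i.e. 1/sin(1) for this stand-in problem
```

For the stand-in problem the exact solution is u = sin(x)/sin(1), so the recovered slope is 1/sin(1). A fractional power like (du/dx)^(2/3) in the real equation needs care near points where du/dx changes sign, but the shooting recipe itself is unchanged.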