[SciPy-Dev] Comments on optimize.newton function

Charles R Harris charlesr.harris at gmail.com
Mon May 23 10:37:33 EDT 2011


On Mon, May 23, 2011 at 12:21 AM, Gökhan Sever <gokhansever at gmail.com>wrote:

> On Sun, May 22, 2011 at 9:13 PM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> > I'm not so confident about the rh >= 1 case, but I've attached an
> > example for rh = .95, rd = 1e-8. The light blue line is the lhs from
> > above, and the labeled lines are the rhs for different values of kappa.
> > The heavy horizontal line is the rh, and the bracket I was suggesting
> > was between the zero of the rhs at x = rd and its crossing with the rh
> > line. There are corner cases here depending on the parameter values, so
> > this probably needs more exploration; there might be cases with two
> > zeros.
> >
> > The same thing holds with rd = 1.5e-6. Note that the zero is very near
> > the upper limit.
> >
> > I suspect there will be none or two zeros in the supersaturated case.
> >
> > Chuck
>
> Thanks for spending your time and producing those plots. Could you
> please provide the code that you used to create the figures? This
> might help me to better understand some of the points you have made in
> your latest reply.
>
>
I've attached the module with the lhs, rhs functions. I hope I got them
right ;) The plots were done using x = linspace(small number > 0, 5*rd, 500)
and a loop for the values of kappa. They might actually look better as a
semilogx plot.


> Your comment about there being no or two zeros worries me a bit. I
> can't easily see how far apart these two zeros are from each other, if
> they exist at all. One of them could be unrealistic and easy to
> disregard, but I have yet to verify that claim.
>
>
The reason I think there will be two zeros is that the upper branch of the
hyperbolic rhs is concave up while the lhs is concave down, so if they
intersect it will be at two points or a tangent (double zero). At least the
supersaturated case shows up as being a bit squirrelly, which is probably a
good sign for the model ;) Also note that the zeros go off to +inf as rh ->
1, which might be a good argument for using rd/x as the independent
variable.
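The geometry described here (a concave-up branch meeting a concave-down curve at zero or two points) is easy to reproduce with a toy pair of curves. Assuming scipy is available, each intersection can be bracketed separately and handed to brentq; the parabolas below are illustrative stand-ins, not the actual model:

```python
import numpy as np
from scipy.optimize import brentq

def lhs(x):
    return 1.0 - (x - 2.0) ** 2        # concave down, peak at x = 2

def rhs(x):
    return 0.2 * (x - 2.0) ** 2        # concave up, minimum at x = 2

def f(x):
    return lhs(x) - rhs(x)             # zero where the curves intersect

# f(x) = 1 - 1.2*(x - 2)^2, so the two roots sit at 2 +/- sqrt(1/1.2).
# Each bracket contains exactly one sign change, as brentq requires.
left = brentq(f, 0.0, 2.0)             # bracket the left intersection
right = brentq(f, 2.0, 4.0)            # bracket the right intersection
offset = np.sqrt(1.0 / 1.2)
```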


> I went ahead and tested the secant and fsolve solvers to see if they
> produce any significant differences in terms of the root they return.
> I assume fsolve is a more robust solver. You can see this comparison
> in the attached figure. I use a tolerance value of 1.e-20 for both
> solvers. Again this comparison is based on the estimate of about 20k
> different values. Most of the differences are zero. I focused in on a
> more interesting part of the figure. Even there the difference is about
> 1.e-17, which is quite insignificant.
>
>
I think you are right that the secant solver needs a user input for the
initial step size, although working near singularities is always going to
be a problem, hence the appeal of a change of variables. In fact, the whole
newton function could probably use a thorough rethink.
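To illustrate the point about the initial step: a bare-bones secant iteration where the first step dx is an explicit user parameter. This is a sketch of the suggested interface, not SciPy's API; scipy.optimize.newton currently picks the second point internally when no derivative is given.

```python
import math

def secant(f, x0, dx, tol=1e-12, maxiter=50):
    """Secant method with a user-supplied initial step size dx."""
    p0, p1 = x0, x0 + dx               # dx replaces the hard-wired initial step
    f0, f1 = f(p0), f(p1)
    for _ in range(maxiter):
        if f1 == f0:                   # flat secant line: cannot proceed
            raise RuntimeError("secant slope vanished")
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)
    raise RuntimeError("no convergence in %d iterations" % maxiter)

# Example: the fixed point of cos, f(x) = cos(x) - x, root near 0.739085.
root = secant(lambda x: math.cos(x) - x, 1.0, 1e-4)
```

Near a singularity, a dx that overshoots onto the other branch ruins the first secant slope, which is why letting the caller choose it (or changing variables first) matters.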

Chuck
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gokhan.py
Type: text/x-python
Size: 157 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/scipy-dev/attachments/20110523/597fe2a0/attachment.py>

