[SciPy-User] Find the convergent solutions of a multivariate equation efficiently

Moore, Eric (NIH/NIDDK) [F] eric.moore2 at nih.gov
Tue Mar 24 08:34:05 EDT 2015


> -----Original Message-----
> From: Camille Chambon [mailto:camillechambon at yahoo.fr]
> Sent: Monday, March 23, 2015 1:07 PM
> To: Benjamin Trendelkamp-Schroer
> Cc: scipy-user at scipy.org
> Subject: Re: [SciPy-User] Find the convergent solutions of a
> multivariate equation efficiently
> 
> Hello Benjamin,
> 
> Thanks for your answer.
> 
> Using scipy.optimize.fixed_point increases computation time a lot. It
> is not acceptable to me.

A fair comparison between the code you posted and scipy.optimize.fixed_point requires setting xtol=0.01, since that is the tolerance your hand-rolled loop uses. Is the fixed_point code still slower in that case? It does essentially what your code does, with some bells and whistles, so it should not be meaningfully slower.
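For instance, here is a minimal sketch (with a made-up linear contraction standing in for your my_function; the update rules below are illustrative only):

```python
import numpy as np
from scipy.optimize import fixed_point

# Toy stand-in for the real update: a linear contraction whose
# fixed point is (1, 2).  Replace with the actual my_function logic.
def f(x):
    return np.array([0.5 * x[0] + 0.5, 0.5 * x[1] + 1.0])

# xtol=0.01 matches the tolerance used in the hand-rolled loop.
x_star = fixed_point(f, np.array([0.1, 0.1]), xtol=0.01)
```

With the default 'del2' (Steffensen) acceleration, this typically needs far fewer function evaluations than plain iteration.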

There is also a pull request open to add a more sophisticated solver for fixed point problems (https://github.com/scipy/scipy/pull/4460).

> 
> With scipy.optimize.minimize, the objective function must return only
> one scalar. But my function returns two scalars. So I can't see how I
> could apply scipy.optimize.minimize to my problem.

optimize.minimize is not the tool for this job: it expects a scalar objective, while what you have is a pair of update equations, i.e. a fixed-point problem rather than a minimization.
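That said, if plain fixed-point iteration turns out to be slow, one standard reformulation (my suggestion here, not something from the thread) is to pose x = f(x) as the root-finding problem f(x) - x = 0 and hand it to scipy.optimize.root, which accepts a vector-valued residual:

```python
import numpy as np
from scipy.optimize import root

# Toy contraction standing in for my_function (illustrative only).
def f(x):
    return np.array([0.5 * x[0] + 0.5, 0.5 * x[1] + 1.0])

# Solve f(x) - x = 0 with MINPACK's hybrid (Powell) method.
sol = root(lambda x: f(x) - x, np.array([0.1, 0.1]), method='hybr')
```

Newton-type solvers like this usually converge in far fewer iterations than self-consistent sweeps, at the cost of extra residual evaluations per step.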

-Eric
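P.S. To make the vectorized sketch in Benjamin's message below concrete, here is a runnable toy version in which the "..." placeholders are filled with made-up linear updates (illustrative only, not your actual model):

```python
import numpy as np
from scipy.optimize import fixed_point

def f(x):
    # Split the 20-element state into its two 10-element pieces.
    x1 = x[0:10]
    x2 = x[10:]
    # Made-up contractions standing in for compute_new_x1/compute_new_x2,
    # so the iteration provably converges (fixed points 2.0 and 4.0).
    x1_new = 0.5 * x1 + 1.0
    x2_new = 0.25 * x2 + 3.0
    # Reassemble the new state into a single ndarray.
    x_new = np.empty(20)
    x_new[0:10] = x1_new
    x_new[10:] = x2_new
    return x_new

x0 = np.full(20, 0.1)
x_star = fixed_point(f, x0, xtol=0.01)
```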

> 
> Thanks for your help anyway.
> 
> Cheers,
> 
> Camille
> 
> Le 23/03/2015 11:16, Benjamin Trendelkamp-Schroer a écrit :
> > Hi Camille,
> >
> > my message did not get forwarded to the list, so I am forwarding it
> > to your address. I hope this will help you.
> >
> > On 21.03.2015 20:00, Camille Chambon wrote:
> >> Hello,
> >>
> >> I have a function:
> >>
> >> def my_function(x1, x2):
> >>       y1 = compute_y1(x1)
> >>       y2 = compute_y2(x2)
> >>       x1_new = compute_new_x1(x1, y1, y2)
> >>       x2_new = compute_new_x2(x2, y1, y2)
> >>       return x1_new, x2_new, y1, y2
> >>
> >> I would like to find the convergent values of y1 and y2, that is,
> >> when abs((x1_new - x1) / x1) < epsilon and
> >> abs((x2_new - x2) / x2) < epsilon.
> >>
> >> For now, I initialize x1 and x2 and I call my_function numerous
> >> times:
> >>
> >> epsilon = 0.01
> >> x1, x2 = 0.1, 0.1
> >> x1_new, x2_new = 100.0, 100.0
> >> while abs((x1_new - x1)/x1) >= epsilon or abs((x2_new - x2)/x2) >= epsilon:
> >>       x1, x2 = x1_new, x2_new
> >>       x1_new, x2_new, y1, y2 = my_function(x1, x2)
> >> print 'y1 = ', y1
> >> print 'y2 = ', y2
> >>
> >> It works. But I wonder if it's the most efficient method.
> >>
> >> I read about scipy.optimize.minimize, but I can not see how it could
> >> be applied to my problem.
> >>
> >> Thanks in advance for your help.
> >>
> >> Cheers,
> >>
> >> Camille
> >> _______________________________________________
> >> SciPy-User mailing list
> >> SciPy-User at scipy.org
> >> http://mail.scipy.org/mailman/listinfo/scipy-user
> > Hello Camille,
> >
> > if I understand correctly the function 'my_function' maps old values
> > (x1, x2) to new values (x1_new, x2_new),
> >
> > (x1_new, x2_new) = f(x1, x2)
> >
> > and you want to find those values (x1*, x2*) which fulfill
> >
> > (x1*, x2*) = f(x1*, x2*)
> >
> > A point (x1*, x2*) with this property is called a fixed point for the
> > mapping f and there is a scipy function that might help you to find
> > this fixed point
> >
> >
> > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fixed_point.html
> >
> > Unfortunately these types of self-consistent iterations can exhibit
> > rather slow convergence.
> >
> > If there is a way to formulate your problem as a minimization problem
> > with an objective function g(x1, x2) for which you can compute the
> > gradient G(x1, x2), you will be able to use methods for the
> > minimization of a multivariate function, e.g. gradient descent or
> > Newton's method, which often converge much faster than
> > self-consistent iterations. You can learn more about the available
> > scipy functions at
> >
> > http://docs.scipy.org/doc/scipy/reference/optimize.html
> >
> > Some of them are derivative-free methods, so even if you cannot
> > obtain the gradient of g, you can still use them to find the
> > minimizer of g.
> >
> > As Ryan suggested, your function f needs to be able to process the
> > tuple x = (x1, x2). I would recommend storing x1 and x2 in a numpy
> > array x and having your function operate on the elements of that
> > ndarray internally.
> >
> > An example function similar to yours, assuming that x1 and x2 are
> > vectors of 10 elements each, so that x has a total of 20 elements,
> > is:
> >
> > def f(x):
> >      x1 = x[0:10]
> >      x2 = x[10:]
> >      y1 = ...
> >      y2 = ...
> >
> >      x1_new = ...
> >      x2_new = ...
> >
> >      x_new = np.zeros(20)
> >      x_new[0:10] = x1_new
> >      x_new[10:] = x2_new
> >
> >      return x_new
> >
> > Now input x and output x_new are both numpy arrays with 20 elements,
> > and it should work with scipy.optimize.fixed_point.
> >
> > I hope this will help you,
> >
> > Benjamin
> >
> 

