[SciPy-user] optimization using fmin_bfgs with gradient information

Sebastian Walter sebastian.walter at gmail.com
Sun Jul 19 11:41:44 EDT 2009


Hello,
I would like to add your objective function to the unit tests
for one of the automatic differentiation tools I've been developing:
PYADOLC (download at http://github.com/b45ch1/pyadolc).
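
For reference, differentiating a function with PYADOLC follows the
taping pattern below (a minimal sketch based on the project README;
f is a stand-in objective, not yours, and the math.log and abs calls
in your code would need AD-aware counterparts before taping):

    import numpy
    import adolc  # PYADOLC, http://github.com/b45ch1/pyadolc

    def f(x):
        # stand-in smooth objective; a real fobj would go here,
        # with math.log replaced by an AD-aware log
        return numpy.sum(x * x)

    n = 5
    # record one evaluation of f on a tape with tag 1
    adolc.trace_on(1)
    ax = adolc.adouble(numpy.zeros(n))
    adolc.independent(ax)
    ay = f(ax)
    adolc.dependent(ay)
    adolc.trace_off()

    # replay the tape to get exact derivatives at any point
    x0 = numpy.random.rand(n)
    g = adolc.gradient(1, x0)  # no finite differences involved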

Unfortunately, the description of the objective function is
incomplete: the code references self.n, self.names, self.observations
and self.tau, which are not defined in the snippet. Would it be
possible for you to post the full definition? That would be great :)


Sebastian



2009/7/19 Ernest Adrogué <eadrogue at gmx.net>:
> Hi,
> 18/07/09 @ 18:36 (+0200), thus spake Sebastian Walter:
>> I don't find it so hard to believe that you got your gradient function wrong.
>> Could you post the code of your objective function?
>
> Here it goes:
>
> def fobj(self, x):
>
>    # x = (alpha1...alphan, beta0..betan, gamma, rho)
>
>    n = self.n
>
>    # use absolute values for alphas and betas
>
>    y = [abs(i) for i in x[:-2]]
>
>    # alpha0 = n - sum(alpha1...alphan)
>
>    y.insert(0, abs(n-sum(y[:n-1])))
>
>    alpha = dict(zip(self.names, y[:n]))
>    beta = dict(zip(self.names, y[n:]))
>
>    gamma = abs(x[-2])
>    rho = x[-1]
>
>    pseudo_likelihood = 0
>
>    for obs in self.observations:
>
>        mu1 = alpha[obs.ht] * beta[obs.at] * gamma
>        mu2 = alpha[obs.at] * beta[obs.ht]
>        tau = self.tau(mu1, mu2, rho, obs.hg, obs.ag)
>
>        # avoid log(0)
>        mu1 = mu1 if mu1 > 0 else 1e-10
>        mu2 = mu2 if mu2 > 0 else 1e-10
>        tau = tau if tau > 0 else 1e-10
>
>        pseudo_likelihood += math.log(tau)
>        pseudo_likelihood += obs.hg * math.log(mu1) - mu1
>        pseudo_likelihood += obs.ag * math.log(mu2) - mu2
>
>    return -pseudo_likelihood
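
Each observation adds log(tau) + hg*log(mu1) - mu1 + ag*log(mu2) - mu2
to the pseudo-likelihood, i.e. two Poisson log-likelihood terms plus
the tau correction, and the hand-coded gradient has to differentiate
every one of these terms consistently. A quick way to locate the
mistake is scipy.optimize.check_grad, which compares an analytic
gradient against finite differences. A minimal sketch, assuming your
object is called model and your gradient function fobj_grad (both
names are placeholders, adjust them):

    import numpy as np
    from scipy.optimize import check_grad, approx_fprime

    # a point of the right length: n-1 alphas, n betas, gamma, rho
    x0 = 0.5 * np.ones(2 * model.n + 1)

    err = check_grad(model.fobj, model.fobj_grad, x0)
    print(err)  # should be close to 0 if the gradient is right

    # a per-component comparison shows which partials are wrong
    fd = approx_fprime(x0, model.fobj, 1e-8)
    print(np.column_stack((model.fobj_grad(x0), fd)))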
>
>> Maybe you've just got the wrong sign. Near the optimum this would be OK,
>> but a descent step taken away from the optimum is bound to fail.
>
> Yes, it must be the gradient that is wrong. It occurs to me
> that it could be related to the fact that I'm clamping the value
> of tau() in the objective function when tau is not positive, and I
> don't think tau_prime() used in the gradient reflects this.
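
That would explain it: wherever the objective clamps a quantity, the
true derivative of the clamped function is zero, while a gradient
built from the unclamped tau_prime() reports a nonzero slope there
(the abs() calls introduce the same kind of kink at zero). A toy
illustration of the mismatch, not your code:

    import math

    def f(t):
        t = t if t > 0 else 1e-10  # clamp, as in fobj
        return math.log(t)

    def grad_unclamped(t):
        return 1.0 / t             # pretends the clamp is not there

    # at a clamped point the function is flat, the "gradient" is not
    t0, eps = -0.5, 1e-6
    fd = (f(t0 + eps) - f(t0 - eps)) / (2 * eps)
    print(fd)                  # 0.0
    print(grad_unclamped(t0))  # -2.0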
>
> Thanks.
>
>
> Ernest
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>


