[Numpy-discussion] automatic differentiation with PyAutoDiff

srean srean.list at gmail.com
Thu Jun 14 17:06:49 EDT 2012


>
> You're right - there is definitely a difference between a correct
> gradient and a gradient that is both correct and fast to compute.
>
> The current quick implementation of pyautodiff is naive in this
> regard.


Oh, and by no means was I criticizing your implementation. It is a
very hard problem to solve and, as you indicate, takes several
man-years to deal with. And compared to having no gradient at all, a
gradient that is possibly slower to compute is a big improvement :)


> True, even approximating a gradient by finite differences is a subtle
> thing if you want to get the most precision per time spent. Another
> thing I was wondering about was periodically re-running the original
> bytecode on inputs to make sure that the derived bytecode produces the
> same answer (!). Those two sanity checks would detect the two most
> scary errors to my mind as a user:
> a) that autodiff got the original function wrong
> b) that autodiff is mis-computing a gradient.


I was suggesting finite differences just as a sanity check, not as an
actual substitute for the gradient. You won't believe how many times
the finite-difference check has saved me from heading in the exact
opposite direction!
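
For concreteness, here is the kind of check I have in mind: a minimal
sketch in plain NumPy, where f is the scalar objective, grad_f is the
(say, autodiff-derived) gradient, and f_derived is the rebuilt version
of the function. All of these names are placeholders for illustration,
not part of pyautodiff's API.

    import numpy as np

    def check_gradient(f, grad_f, x, eps=1e-6, rtol=1e-4):
        """Compare a candidate gradient against central differences."""
        x = np.asarray(x, dtype=float)
        g = np.asarray(grad_f(x), dtype=float)
        g_fd = np.empty_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e.flat[i] = eps
            # Central difference along coordinate i.
            g_fd.flat[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
        return np.allclose(g, g_fd, rtol=rtol, atol=1e-8), np.max(np.abs(g - g_fd))

    def check_outputs(f, f_derived, xs, rtol=1e-8):
        """Sanity check (a) above: the derived function still computes f."""
        return all(np.allclose(f(x), f_derived(x), rtol=rtol) for x in xs)

E.g. with f = lambda x: np.sum(x ** 2) and grad_f = lambda x: 2 * x,
check_gradient(f, grad_f, np.random.randn(5)) should come back True,
while a gradient with a flipped sign fails immediately, which is
exactly the "opposite direction" failure I mentioned.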


