[Speed] Analysis of a Python performance issue

Kevin Modzelewski kmod at dropbox.com
Mon Nov 21 18:26:19 EST 2016


Oh sorry, I was unclear: yes, this is for the Pyston binary itself, and yes,
PGO does a better job and I definitely think it should be used.

Separately, we often use non-PGO builds for quick checks, so we also have
the system I described, which makes our non-PGO builds more reliable by
reusing the function ordering from the PGO build.
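The thread doesn't spell out Pyston's exact tooling, so everything below is an assumption: a minimal sketch of one plausible way to "reuse the function ordering from the PGO build", by dumping the PGO binary's text-section symbols in address order with `nm --numeric-sort` and feeding that list to the linker when building the non-PGO binary. The helper names and file names are mine, not Pyston's.

```python
import subprocess

def ordering_from_nm(nm_output):
    """Extract text-section symbols in address order from `nm --numeric-sort` output."""
    order, seen = [], set()
    for line in nm_output.splitlines():
        parts = line.split()
        # nm lines look like "0000000000401120 T main"; keep code symbols (type t/T).
        if len(parts) == 3 and parts[1] in ("t", "T") and parts[2] not in seen:
            seen.add(parts[2])
            order.append(parts[2])
    return order

def write_order_file(pgo_binary, out_path):
    """Dump the PGO build's function layout for reuse when linking the non-PGO build."""
    nm_out = subprocess.run(["nm", "--numeric-sort", pgo_binary],
                            capture_output=True, text=True, check=True).stdout
    with open(out_path, "w") as f:
        f.write("\n".join(ordering_from_nm(nm_out)) + "\n")

# The non-PGO build can then be linked with lld's
#   -Wl,--symbol-ordering-file=<out_path>
# so the hot-function layout the C compiler computed for the PGO build
# carries over, without instrumenting the quick-check build itself.
```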

On Mon, Nov 21, 2016 at 11:39 AM, serge guelton <
serge.guelton at telecom-bretagne.eu> wrote:

> On Sat, Nov 19, 2016 at 05:58:19PM -0800, Kevin Modzelewski wrote:
> > I think it's safe to not reinvent the wheel here.  Some searching gives:
> > http://perso.ensta-paristech.fr/~bmonsuez/Cours/B6-4/Articles/papers15.pdf
> > http://www.cs.utexas.edu/users/mckinley/papers/dcm-vee-2006.pdf
> > https://github.com/facebook/hhvm/tree/master/hphp/tools/hfsort
>
> Thanks Kevin for the pointers! I'm new to this area of optimization...
> another source of fun and weirdness :-$
>
> > Pyston takes a different approach where we pull the list of hot functions
> > from the PGO build, ie defer all the hard work to the C compiler.
>
> You're talking about the build of Pyston itself, not the JIT-generated
> code, right? In that case, how is it different from a regular
>
>     -fprofile-generate followed by several runs then -fprofile-use?
>
> PGO builds should perform better than just marking some functions as hot,
> since they also include info for better branch prediction, right?
>
>
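(For readers following the archive: the "regular" workflow Serge refers to is roughly the following; the source files, target names, and benchmark script are placeholders, and clang's equivalent flags differ slightly.)

```shell
# 1. Instrumented build: counters are written out when the program exits.
gcc -O2 -fprofile-generate -o pyston-instrumented *.c

# 2. Training runs over a representative workload.
./pyston-instrumented bench.py

# 3. Optimized rebuild: the collected profile drives inlining decisions,
#    basic-block layout, branch weights, and hot/cold function placement.
gcc -O2 -fprofile-use -o pyston *.c
```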