[pypy-commit] extradoc extradoc: rename low-level resume data into backend map
bivab
noreply at buildbot.pypy.org
Tue Aug 14 14:44:34 CEST 2012
Author: David Schneider <david.schneider at picle.org>
Branch: extradoc
Changeset: r4563:049f9e52382c
Date: 2012-08-14 14:40 +0200
http://bitbucket.org/pypy/extradoc/changeset/049f9e52382c/
Log: rename low-level resume data into backend map
diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -121,7 +121,6 @@
%___________________________________________________________________________
-\todo{find a better name for \texttt{low-level resume data}}
\todo{mention somewhere that it is to be expected that most guards do not fail}
\section{Introduction}
@@ -555,8 +554,10 @@
condition check two things are generated/compiled.
First a special data
-structure called \emph{low-level resume data} is created. This data structure encodes the
-information about where, i.e. which register or stack location, the IR-variables required to rebuild the state will be stored when the guard is executed.
+structure called \emph{backend map} is created. This data structure maps
+each IR-variable the guard needs to rebuild the state to the low-level
+location (register or stack slot) where its value will be stored when the
+guard is executed.
This data
structure stores the values succinctly, using an encoding that packs 7 bits of
information into each 8-bit byte, ignoring leading zeros. This encoding is efficient to create and
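The paper does not spell out the exact byte format, but "8 bits to store 7 bits of information, ignoring leading zeros" describes a LEB128-style variable-length integer: each byte carries 7 payload bits plus a continuation flag, and high-order zero groups are simply not emitted. A minimal sketch under that assumption (the function names are hypothetical, not PyPy's):

```python
def encode_varint(value):
    """Encode a non-negative integer using 7 data bits per byte.

    The high bit of each byte is a continuation flag, so every 8 bits
    store 7 bits of information, and leading zero groups are dropped.
    Hypothetical LEB128-style sketch; not PyPy's actual format.
    """
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # last byte: continuation bit clear
            return bytes(out)


def decode_varint(data):
    """Decode a varint produced by encode_varint."""
    result = 0
    shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return result
```

Small location indices (the common case for register numbers and stack offsets) thus fit in a single byte, which is why the encoding is cheap to create and compact to store.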
@@ -567,9 +568,9 @@
Guards are implemented as a conditional jump to this trampoline in case the
guard check fails.
In the trampoline the pointer to the
-\emph{low-level resume data} is loaded and after storing the current execution state
+backend map is loaded and, after storing the current execution state
(registers and stack), execution jumps to a generic bailout handler, also known
-as \texttt{compensation code},
+as \emph{compensation code},
that is used to leave the compiled trace in case of a guard failure.
Using the encoded location information the bailout handler reads from the
@@ -615,7 +616,7 @@
loop the guard becomes just a point where control-flow can split. The loop
after the guard and the bridge are just conditional paths.
Figure~\ref{fig:trampoline} shows a diagram of a compiled loop with two guards:
-Guard \#1 jumps to the trampoline, loads the \texttt{low level resume data} and
+Guard \#1 jumps to the trampoline, loads the \texttt{backend map} and
then calls the bailout handler, whereas Guard \#2 has already been patched
and directly jumps to the corresponding bridge. The bridge also contains two
guards that work based on the same principles.
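The two guard states described above (unpatched versus patched) can be sketched as follows. This is a hypothetical illustration of the dispatch logic, not PyPy's machine-level implementation; the names `Guard`, `on_failure`, and `bailout` are made up for the sketch:

```python
class Guard:
    """Sketch of guard dispatch: an unpatched guard goes through the
    trampoline to the generic bailout handler; a patched guard jumps
    directly to its compiled bridge."""

    def __init__(self, backend_map):
        self.backend_map = backend_map
        self.bridge = None          # set once a bridge is compiled

    def on_failure(self, bailout_handler):
        if self.bridge is not None:
            # like Guard #2: already patched, jump straight to the bridge
            return self.bridge()
        # like Guard #1: trampoline loads the backend map and calls the
        # generic bailout handler shared by all guards
        return bailout_handler(self.backend_map)


def bailout(backend_map):
    # stand-in for the bailout handler that would read the encoded
    # locations and rebuild the interpreter state
    return ("bailout", backend_map)


guard = Guard(backend_map={"i0": "r3", "p1": "stack[2]"})
assert guard.on_failure(bailout)[0] == "bailout"   # unpatched path

guard.bridge = lambda: "ran-bridge"                # patch in a bridge
assert guard.on_failure(bailout) == "ran-bridge"   # patched path
```

The point of the patching step is that a hot guard pays the trampoline and bailout cost only until its bridge is compiled; afterwards the guard is just a conditional jump between two compiled paths.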
@@ -724,26 +725,26 @@
\end{figure}
The overhead that is incurred by the JIT to manage the \texttt{resume data},
-the \texttt{low-level resume data} as well as the generated machine code is
+the \texttt{backend map} as well as the generated machine code is
shown in Figure~\ref{fig:backend_data}. It shows the total memory consumption
of the code and of the data generated by the machine code backend and an
approximation of the size of the \texttt{resume data} structures for the
different benchmarks mentioned above. The machine code taken into account is
composed of the compiled operations, the trampolines generated for the guards
and a set of support functions that are generated when the JIT starts and which
-are shared by all compiled traces. The size of the \texttt{low-level resume
-data} is the size of the compressed mapping from registers and stack to
+are shared by all compiled traces. The size of the \texttt{backend map}
+is the size of the compressed mapping from registers and stack to
IR-level variables and finally the size of the \texttt{resume data} is an
approximation of the size of the compressed high-level resume data as described
in Section~\ref{sec:Resume Data}.\footnote{
The size of the resume data is not measured at runtime, but reconstructed from
log files.}
-For the different benchmarks the \texttt{low-level resume data} has a size of
+For the different benchmarks the \texttt{backend map} has a size of
about 15\% to 20\% of the size of the generated machine code. On the other
hand, the generated machine code has only a
-size ranging from 20.5\% to 37.98\% of the size of the high and low-level
-resume data combined and being compressed as described before.
+size ranging from 20.5\% to 37.98\% of the size of the resume data and the backend map
+combined, both compressed as described before.
Tracing JIT compilers only compile the subset of the code executed in a program
that occurs in a hot loop; for this reason the amount of generated machine
diff --git a/talk/vmil2012/tool/build_tables.py b/talk/vmil2012/tool/build_tables.py
--- a/talk/vmil2012/tool/build_tables.py
+++ b/talk/vmil2012/tool/build_tables.py
@@ -163,7 +163,7 @@
head = [r'Benchmark',
r'Code',
r'Resume data',
- r'll data',
+ r'Backend map',
r'Relation']
table = []