[pypy-commit] extradoc extradoc: Remove most of \texttt annotations

bivab noreply at buildbot.pypy.org
Tue Aug 14 14:44:35 CEST 2012


Author: David Schneider <david.schneider at picle.org>
Branch: extradoc
Changeset: r4564:54dcd2c56b67
Date: 2012-08-14 14:43 +0200
http://bitbucket.org/pypy/extradoc/changeset/54dcd2c56b67/

Log:	Remove most of \texttt annotations

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -148,7 +148,7 @@
 
 The operations executed by an interpreter are recorded by the tracing JIT in
 case they are frequently executed (this process is described in more detail in
-Section \ref{sec:Resume Data}). During the recording phase \texttt{guards} are
+Section \ref{sec:Resume Data}). During the recording phase guards are
 inserted into the recorded trace at all
 points where the control flow could diverge. As can be seen in
 Figure~\ref{fig:guard_percent} guards account for about 14\% to 22\% of the
@@ -590,8 +590,7 @@
 generated instructions on x86.
 
 As explained in previous sections, when a specific guard has failed often enough
-a new trace, referred to as a \emph{bridge}, starting from this guard is recorded and
-compiled.
+a bridge starting from this guard is recorded and compiled.
 Since the goal of compiling bridges is to improve execution speed on the
 diverged path (failing guard), they should not introduce additional overhead.
 In particular, the failure of the guard should not lead
@@ -616,7 +615,7 @@
 loop the guard becomes just a point where control-flow can split. The loop
 after the guard and the bridge are just conditional paths.
 Figure~\ref{fig:trampoline} shows a diagram of a compiled loop with two guards:
-Guard \#1 jumps to the trampoline, loads the \texttt{backend map} and
+Guard \#1 jumps to the trampoline, loads the backend map and
 then calls the bailout handler, whereas Guard \#2 has already been patched
 and directly jumps to the corresponding bridge. The bridge also contains two
 guards that work based on the same principles.
@@ -724,23 +723,23 @@
     \label{fig:resume_data_sizes}
 \end{figure}
 
-The overhead that is incurred by the JIT to manage the \texttt{resume data},
-the \texttt{backend map} as well as the generated machine code is
+The overhead incurred by the JIT to manage the resume data,
+the backend map, as well as the generated machine code is
 shown in Figure~\ref{fig:backend_data}. It shows the total memory consumption
 of the code and of the data generated by the machine code backend and an
-approximation of the size of the \texttt{resume data} structures for the
+approximation of the size of the resume data structures for the
 different benchmarks mentioned above. The machine code taken into account is
 composed of the compiled operations, the trampolines generated for the guards
 and a set of support functions that are generated when the JIT starts and which
-are shared by all compiled traces. The size of the \texttt{backend map}
+are shared by all compiled traces. The size of the backend map
 is the size of the compressed mapping from registers and stack to
-IR-level variables and finally the size of the \texttt{resume data} is an
+IR-level variables, and finally the size of the resume data is an
 approximation of the size of the compressed high-level resume data as described
 in Section~\ref{sec:Resume Data}.\footnote{
 The size of the resume data is not measured at runtime, but reconstructed from
 log files.}
 
-For the different benchmarks the \texttt{backend map} has a size of
+For the different benchmarks the backend map has a size of
 about 15\% to 20\% of the size of the
 generated machine code. On the other hand, the generated machine code has only a
 size ranging from 20.5\% to 37.98\% of the size of the resume data and the backend map
@@ -749,8 +748,8 @@
 Tracing JIT compilers only compile the subset of the code executed in a program
 that occurs in a hot loop; for this reason the amount of generated machine
 code will be smaller than in other just-in-time compilation approaches.  This
-creates a larger discrepancy between the size of the \texttt{resume data} when
-compared to the size of the generated machine code and illustrates why it is important to compress the \texttt{resume data} information.
+creates a larger discrepancy between the size of the resume data when
+compared to the size of the generated machine code and illustrates why it is important to compress the resume data.
 
 \begin{figure}
     \include{figures/backend_table}
@@ -758,13 +757,13 @@
     \label{fig:backend_data}
 \end{figure}
 
-Why the efficient storing of the \texttt{resume data} is a central concern in the design
+Why the efficient storing of the resume data is a central concern in the design
 of guards is illustrated by Figure~\ref{fig:resume_data_sizes}. This figure shows
-the size of the compressed \texttt{resume data}, the approximated size of
-storing the \texttt{resume data} without compression and
+the size of the compressed resume data, the approximated size of
+storing the resume data without compression and
 an approximation of the best possible compression of the resume data by
 compressing the data using the
-\texttt{xz} compression tool, which is a ``general-purpose data compression
+\emph{xz} compression tool, which is a ``general-purpose data compression
 software with high compression ratio''.\footnote{\url{http://tukaani.org/xz/}}
 
 The results show that the current approach of compression and data sharing only
