[pypy-commit] extradoc extradoc: Move some figures around and add subsections to the evaluation section

bivab noreply at buildbot.pypy.org
Thu Aug 9 17:17:15 CEST 2012


Author: David Schneider <david.schneider at picle.org>
Branch: extradoc
Changeset: r4493:14bfddc82d2e
Date: 2012-08-09 17:16 +0200
http://bitbucket.org/pypy/extradoc/changeset/14bfddc82d2e/

Log:	Move some figures around and add subsections to the evaluation
	section

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -608,7 +608,16 @@
 \end{description}
 
 From the mentioned benchmarks we collected different datasets to evaluate the
-Frequency, the overhead and overall behaviour of guards.
+frequency, the overhead, and overall behaviour of guards; the results are
+summarized in the remainder of this section.
+
+\subsection{Frequency of Guards}
+\label{sub:guard_frequency}
+\begin{figure*}
+    \include{figures/benchmarks_table}
+    \caption{Benchmark Results}
+    \label{fig:benchmarks}
+\end{figure*}
 Figure~\ref{fig:benchmarks} summarizes the total number of operations that were
 recorded during tracing for each of the benchmarks and what percentage of these
 operations are guards. The number of operations was counted on the unoptimized
@@ -618,29 +627,14 @@
 Figure~\ref{fig:guard_percent}. These numbers show that guards are a rather
 common operation in the traces, which is a reason to put effort into
 optimizing them.
-\todo{some pie charts about operation distribution}
-
-\begin{figure*}
-    \include{figures/benchmarks_table}
-    \caption{Benchmark Results}
-    \label{fig:benchmarks}
-\end{figure*}
-
+\subsection{Overhead of Guards}
+\label{sub:guard_overhead}
 \begin{figure}
     \include{figures/resume_data_table}
     \caption{Resume Data sizes in KiB}
     \label{fig:resume_data_sizes}
 \end{figure}
 
-\begin{figure}
-    \include{figures/failing_guards_table}
-    \caption{Failing guards}
-    \label{fig:failing_guards}
-\end{figure}
-
-
-\todo{add a footnote about why guards have a threshold of 200}
-
 The overhead that is incurred by the JIT to manage the \texttt{resume data},
 the \texttt{low-level resume data} as well as the generated machine code is
 shown in Figure~\ref{fig:backend_data}. It shows the total memory consumption
@@ -667,11 +661,6 @@
 the overhead associated with guards to resume execution from a side exit appears
 to be high.\bivab{put into relation to other JITs, compilers in general}
 
-\begin{figure*}
-    \include{figures/backend_table}
-    \caption{Total size of generated machine code and guard data}
-    \label{fig:backend_data}
-\end{figure*}
 
 Neither figure takes garbage collection into account. Pieces of machine
 code can be globally invalidated or just become cold again. In both cases the
@@ -681,6 +670,23 @@
 
 \todo{compare to naive variant of resume data}
 
+\begin{figure}
+    \include{figures/backend_table}
+    \caption{Total size of generated machine code and guard data}
+    \label{fig:backend_data}
+\end{figure}
+
+\subsection{Guard Failures}
+\label{sub:guard_failure}
+\begin{figure}
+    \include{figures/failing_guards_table}
+    \caption{Failing guards}
+    \label{fig:failing_guards}
+\end{figure}
+
+
+\todo{add a footnote about why guards have a threshold of 200}
+
 \section{Related Work}
 \label{sec:Related Work}
 

