[pypy-commit] extradoc extradoc: more benchmark explanations
hakanardo
noreply at buildbot.pypy.org
Wed Aug 8 20:37:14 CEST 2012
Author: Hakan Ardo <hakan at debian.org>
Branch: extradoc
Changeset: r4479:5ec395562141
Date: 2012-08-08 20:35 +0200
http://bitbucket.org/pypy/extradoc/changeset/5ec395562141/
Log: more benchmark explanations
diff --git a/talk/dls2012/licm.pdf b/talk/dls2012/licm.pdf
index 06285c88fab2c8516c3f6d64f1fa92984ef085ea..0bfb4121074fae4028d49aea25f9c0e2fa42dd53
GIT binary patch
[cut]
diff --git a/talk/dls2012/paper.tex b/talk/dls2012/paper.tex
--- a/talk/dls2012/paper.tex
+++ b/talk/dls2012/paper.tex
@@ -63,7 +63,7 @@
\newboolean{showcomments}
-\setboolean{showcomments}{true}
+\setboolean{showcomments}{false}
\ifthenelse{\boolean{showcomments}}
{\newcommand{\nb}[2]{
\fbox{\bfseries\sffamily\scriptsize#1}
@@ -1006,15 +1006,42 @@
hardcoded into the implementation, making the benchmark consist of a single loop too.
\item {\bf conv3x3}: two-dimensional convolution with kernel of fixed
size $3 \times 3$ using a custom class to represent two-dimensional
- arrays.
+ arrays. It is implemented as two nested loops that iterate over the elements of the
+$n\times n$ output matrix ${\bf B} = \left(b_{i,j}\right)$ and calculates each element from the input matrix
+${\bf A} = \left(a_{i,j}\right)$ and a kernel ${\bf K} = \left(k_{i,j}\right)$ using $b_{i,j} = $
+\begin{equation}
+ \label{eq:convsum}
+ \begin{array}{lclclc}
+ k_{3,3} a_{i-1,j-1} &+& k_{3,2} a_{i-1,j} &+& k_{3,1} a_{i-1,j+1} & + \\
+ k_{2,3} a_{i,j-1} &+& k_{2,2} a_{i,j} &+& k_{2,1} a_{i,j+1} & + \\
+ k_{1,3} a_{i+1,j-1} &+& k_{1,2} a_{i+1,j} &+& k_{1,1} a_{i+1,j+1} \\
+ \end{array}
+\end{equation}
+for $1 \leq i \leq n$ and $1 \leq j \leq n$.
+The memory for storing the matrices is again allocated outside the benchmark, and $n=1000$ was used.
\item {\bf dilate3x3}: two-dimensional dilation with kernel of fixed
size $3 \times 3$. This is similar to convolution but instead of
- summing over the elements, the maximum is taken. That places a
+ summing over the terms in Equation~\ref{eq:convsum}, the maximum over those terms is taken. That places an
external call to a max function within the loop, which prevents some
of the optimizations.
\item {\bf sobel}: a low-level video processing algorithm used to
locate edges in an image. It calculates the gradient magnitude
- using sobel derivatives.
+ using Sobel derivatives. A Sobel x-derivative $D_x$ of an $n \times n$ image ${I}$ is formed
+by convolving ${I}$ with
+\begin{equation}
+ {K} = \left(
+ \begin{array}{ccc}
+ -1 & 0 & 1 \\
+ -2 & 0 & 2 \\
+ -1 & 0 & 1 \\
+ \end{array}
+ \right) ,
+\end{equation}
+and a Sobel y-derivative $D_y$ is formed by convolving with $K^\top$. The gradient magnitude is
+then computed for each pixel independently as $\sqrt{D_x^2 + D_y^2}$. The two convolutions and the pixelwise
+magnitude calculation are combined in the implementation of this benchmark and calculated in a single pass over
+the input image. This single pass consists of two nested loops with a somewhat larger amount of calculation
+performed in each iteration than in the other benchmarks.
\end{itemize}
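The loop structure of the three image-processing benchmarks above can be sketched in Python roughly as follows. This is a minimal illustration only, not the paper's actual benchmark code: it assumes plain list-of-lists matrices with 0-based kernel indices and skips the border pixels, whereas the benchmarks use a custom two-dimensional array class and preallocated output buffers.

```python
import math

def conv3x3(a, k):
    # Two nested loops over the n x n output; each element is the
    # kernel-weighted sum over its 3x3 input neighbourhood, with the
    # kernel indices mirrored as in the convolution sum above.
    n = len(a)
    b = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            b[i][j] = sum(k[2 - di][2 - dj] * a[i + di - 1][j + dj - 1]
                          for di in range(3) for dj in range(3))
    return b

def dilate3x3(a):
    # Same loop structure, but the maximum over the neighbourhood
    # terms is taken instead of a weighted sum; the call to max() is
    # the external call mentioned above.
    n = len(a)
    b = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            b[i][j] = max(a[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return b

def sobel_magnitude(img):
    # Single pass: both Sobel derivatives and the per-pixel gradient
    # magnitude sqrt(Dx^2 + Dy^2) are computed in one pair of nested
    # loops, using the standard Sobel kernels.
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            dx = (-img[i-1][j-1] + img[i-1][j+1]
                  - 2*img[i][j-1] + 2*img[i][j+1]
                  - img[i+1][j-1] + img[i+1][j+1])
            dy = (-img[i-1][j-1] - 2*img[i-1][j] - img[i-1][j+1]
                  + img[i+1][j-1] + 2*img[i+1][j] + img[i+1][j+1])
            out[i][j] = math.sqrt(dx*dx + dy*dy)
    return out
```

Fusing the two convolutions and the magnitude into one pass, as sobel does, avoids materialising the intermediate derivative images.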
The sobel and conv3x3 benchmarks are implemented