[pypy-commit] extradoc extradoc: correct some details

hakanardo noreply at buildbot.pypy.org
Wed Aug 15 20:09:28 CEST 2012


Author: Hakan Ardo <hakan at debian.org>
Branch: extradoc
Changeset: r4587:ed267e483232
Date: 2012-08-15 20:09 +0200
http://bitbucket.org/pypy/extradoc/changeset/ed267e483232/

Log:	correct some details

diff --git a/talk/dls2012/licm.pdf b/talk/dls2012/licm.pdf
index 4e41479628229f6b9c2635f91c7f58c4684ae264..53e9a461f7d0e384c8c7fba88a6002c1337aaeb1
GIT binary patch

[cut]

diff --git a/talk/dls2012/paper.tex b/talk/dls2012/paper.tex
--- a/talk/dls2012/paper.tex
+++ b/talk/dls2012/paper.tex
@@ -63,7 +63,7 @@
 
 
 \newboolean{showcomments}
-\setboolean{showcomments}{false}
+\setboolean{showcomments}{true}
 \ifthenelse{\boolean{showcomments}}
   {\newcommand{\nb}[2]{
     \fbox{\bfseries\sffamily\scriptsize#1}
@@ -1007,10 +1007,10 @@
 \item {\bf conv5}$\left(n\right)$: one-dimensional convolution with fixed kernel-size $5$. Similar to conv3, but with 
 ${\bf k} = \left(k_1, k_2, k_3, k_4, k_5\right)$. The enumeration of the elements in $\bf k$ is still 
 hardcoded into the implementation, making the benchmark consist of a single loop too.
-\item {\bf conv3x3}$\left(n\right)$: two-dimensional convolution with kernel of fixed
+\item {\bf conv3x3}$\left(n,m\right)$: two-dimensional convolution with kernel of fixed
   size $3 \times 3$ using a custom class to represent two-dimensional
  arrays. It is implemented as two nested loops that iterate over the elements of the 
-$n\times n$ output matrix ${\bf B} = \left(b_{i,j}\right)$ and calculates each element from the input matrix
+$m\times n$ output matrix ${\bf B} = \left(b_{i,j}\right)$ and calculates each element from the input matrix
 ${\bf A} = \left(a_{i,j}\right)$ and a kernel ${\bf K} = \left(k_{i,j}\right)$ using $b_{i,j} = $
 \begin{equation}
   \label{eq:convsum}
@@ -1020,8 +1020,9 @@
     k_{1,3} a_{i+1,j-1} &+& k_{1,2} a_{i+1,j} &+& k_{1,1} a_{i+1,j+1}  \\
   \end{array}
 \end{equation}
-for $1 \leq i \leq n$ and $1 \leq j \leq n$.
-The memory for storing the matrices are again allocated outside the benchmark and $n=1000$ was used.
+for $1 \leq i \leq m$ and $1 \leq j \leq n$.
+The memory for storing the matrices is again allocated outside the benchmark, and both $(n,m)=(1000,1000)$ 
+and $(n,m)=(1000000,3)$ were used.
 \item {\bf dilate3x3}$\left(n\right)$: two-dimensional dilation with kernel of fixed
   size $3 \times 3$. This is similar to convolution but instead of
   summing over the terms in Equation~\ref{eq:convsum}, the maximum over those terms is taken. That places a
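
As a concrete reading of the conv5 item above, a minimal Python sketch might look as follows. The function name and argument layout, the mirrored kernel indexing, and the assumption that the input list is four elements longer than the output are illustrative guesses only; conv3, which the item refers to, is defined earlier in the paper and is not shown in this diff.

def conv5(a, k, b):
    # One-dimensional convolution with a fixed kernel of size 5.  The five
    # kernel elements are enumerated explicitly ("hardcoded"), so the whole
    # benchmark body is a single loop.
    # Assumed layout: len(k) == 5 and len(a) == len(b) + 4.
    for i in range(len(b)):
        b[i] = (k[4] * a[i]     + k[3] * a[i + 1] + k[2] * a[i + 2] +
                k[1] * a[i + 3] + k[0] * a[i + 4])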

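The conv3x3 item changed above lends itself to a short sketch as well. The following minimal Python version is an assumption-laden illustration, not the benchmark source: the class name Array2D, the row-major flat-list storage and the zero-padded (m+2) x (n+2) input layout are all made up here so that every neighbour referenced by Equation (convsum) exists.

class Array2D(object):
    # Simple custom two-dimensional array backed by a flat list (row-major),
    # standing in for the "custom class" mentioned in the item above.
    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols
        self.data = [0.0] * (rows * cols)

    def __getitem__(self, idx):
        i, j = idx
        return self.data[i * self.cols + j]

    def __setitem__(self, idx, value):
        i, j = idx
        self.data[i * self.cols + j] = value

def conv3x3(a, k, b):
    # Fill the m x n output b from the (m+2) x (n+2) input a and the 3 x 3
    # kernel k.  The kernel is mirrored as in Equation (convsum): the 0-based
    # element k[1 - di, 1 - dj] multiplies a[i + di, j + dj].
    m, n = b.rows, b.cols
    for i in range(1, m + 1):        # 1 <= i <= m
        for j in range(1, n + 1):    # 1 <= j <= n
            b[i - 1, j - 1] = sum(
                k[1 - di, 1 - dj] * a[i + di, j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1))

In this sketch, (n, m) = (1000, 1000) gives a square 1000 x 1000 output, while (n, m) = (1000000, 3) gives only three rows of one million elements each, so the two parameter choices weight the outer and inner loop very differently.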

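Under the same assumptions, the dilate3x3 item only replaces the sum of the nine kernel terms by their maximum; a hypothetical sketch reusing the Array2D class from above:

def dilate3x3(a, k, b):
    # Like conv3x3, but take the maximum of the nine terms of Equation
    # (convsum) instead of their sum (same assumed layout: (m+2) x (n+2)
    # input a, 3 x 3 kernel k, m x n output b).
    m, n = b.rows, b.cols
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            b[i - 1, j - 1] = max(
                k[1 - di, 1 - dj] * a[i + di, j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1))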