[pypy-svn] r60567 - in pypy/extradoc/talk/ecoop2009: . benchmarks

antocuni at codespeak.net
Thu Dec 18 14:15:46 CET 2008


Author: antocuni
Date: Thu Dec 18 14:15:45 2008
New Revision: 60567

Modified:
   pypy/extradoc/talk/ecoop2009/benchmarks.tex
   pypy/extradoc/talk/ecoop2009/benchmarks/results.txt
Log:
add tables showing the results. They need some restyling as they are very
ugly.



Modified: pypy/extradoc/talk/ecoop2009/benchmarks.tex
==============================================================================
--- pypy/extradoc/talk/ecoop2009/benchmarks.tex	(original)
+++ pypy/extradoc/talk/ecoop2009/benchmarks.tex	Thu Dec 18 14:15:45 2008
@@ -75,23 +75,79 @@
     return b
 \end{lstlisting}
 
+\anto{these tables are ugly}
 
-
-Table XXX and figure XXX show the time spent to calculate the factorial of
-various numbers, with and without the JIT.  Table XXX and figure XXX show the
-same informations for the Fibonacci program.
-
-Note that do get meaningful timings, we had to calculate the factorial and
-Fibonacci of very high numbers.  This means that the results are incorrect due
-to overflow, but since all the runnings overflows in the very same way, the
-timings are still comparable. \anto{I think we should rephrase this sentence}.
-
-As we can see, the code generated by the JIT is almost 500 times faster than
-the non-jitted case, and it is only about 1.5 times slower than the same
-algorithm written in C\#: the difference in speed it is probably due to both
-the fact that the current CLI backend emits slightly non-optimal code and that
-the underyling .NET JIT compiler is highly optimized to handle bytecode
-generated by C\# compilers.
+\begin{table}[ht]
+  \centering
+  \begin{tabular}{|l|r|r|r|r||r|r|}
+    \hline
+    \textbf{n} & 
+    \textbf{Interp} &
+    \textbf{JIT} &
+    \textbf{JIT 2} &
+    \textbf{C\#} &
+    \textbf{Interp/JIT 2} &
+    \textbf{JIT 2/C\#} \\
+    \hline
+
+    $10$    &   0.031  &  0.422  &  0.000  &  0.000  &      N/A  &    N/A \\
+    $10^7$  &  30.984  &  0.453  &  0.047  &  0.031  &  661.000  &  1.500 \\
+    $10^8$  &     N/A  &  0.859  &  0.453  &  0.359  &      N/A  &  1.261 \\
+    $10^9$  &     N/A  &  4.844  &  4.641  &  3.438  &      N/A  &  1.350 \\
+
+    \hline
+
+  \end{tabular}
+  \caption{Factorial benchmark}
+  \label{tab:factorial}
+\end{table}
+
+
+\begin{table}[ht]
+  \centering
+  \begin{tabular}{|l|r|r|r|r||r|r|}
+    \hline
+    \textbf{n} & 
+    \textbf{Interp} &
+    \textbf{JIT} &
+    \textbf{JIT 2} &
+    \textbf{C\#} &
+    \textbf{Interp/JIT 2} &
+    \textbf{JIT 2/C\#} \\
+    \hline
+
+    $10$    &   0.031  &  0.453  &  0.000  &  0.000  &       N/A  &  N/A   \\
+    $10^7$  &  29.359  &  0.469  &  0.016  &  0.016  &  1879.962  &  0.999 \\
+    $10^8$  &     N/A  &  0.688  &  0.250  &  0.234  &       N/A  &  1.067 \\
+    $10^9$  &     N/A  &  2.953  &  2.500  &  2.453  &       N/A  &  1.019 \\
+
+    \hline
+
+  \end{tabular}
+  \caption{Fibonacci benchmark}
+  \label{tab:fibo}
+\end{table}
+
+
+Tables \ref{tab:factorial} and \ref{tab:fibo} show the time spent to calculate
+the factorial and the Fibonacci numbers for various values of $n$.  As we can
+see, for small values of $n$ the time spent running the JIT compiler is much
+higher than the time spent simply interpreting the program.  This is an
+expected result, as so far we have focused only on optimizing the compiled
+code, not the compilation process itself.
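+
+As an illustration of how these columns relate to each other, the timings can
+be thought of as being collected by a harness along the lines of the following
+sketch (a minimal illustration; we assume here that the \textbf{JIT} column
+measures the first run, which includes compilation, while \textbf{JIT 2}
+measures a second run that reuses the already compiled code):
+
+\begin{lstlisting}
+import time
+
+def measure(fn, n):
+    # hypothetical harness (names are illustrative, not from the paper):
+    # the first call triggers JIT compilation, the second call reuses
+    # the already compiled code
+    start = time.time()
+    fn(n)                     # "JIT" column: compilation + execution
+    first_run = time.time() - start
+
+    start = time.time()
+    fn(n)                     # "JIT 2" column: execution only
+    second_run = time.time() - start
+    return first_run, second_run
+\end{lstlisting}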
+
+On the other hand, to get meaningful timings we had to use very large values
+of $n$.  As a consequence, the computed results overflow, but since every
+implementation overflows in exactly the same way, the timings remain
+comparable.  For $n$ greater than $10^7$ we did not run the interpreted
+program, as it would have taken too much time without adding anything to the
+discussion.
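+
+To make the overflow argument concrete, the following sketch (not part of the
+benchmark suite) emulates the wrap-around of 64-bit machine integers: the
+computed value is wrong for large $n$, but it is wrong in the same,
+deterministic way for every implementation, so the running times stay
+comparable:
+
+\begin{lstlisting}
+MASK = (1 << 64) - 1   # emulate 64-bit machine integers (modulo 2**64)
+
+def factorial_wrapping(n):
+    # the product silently wraps around; the result is incorrect for
+    # large n, but identically incorrect in every implementation
+    result = 1
+    for i in range(2, n + 1):
+        result = (result * i) & MASK
+    return result
+\end{lstlisting}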
+
+As we can see, the code generated by the JIT can be up to about 1800 times
+faster than the non-jitted case.  Moreover, it often runs at roughly the same
+speed as the equivalent program written in C\#, being only about 1.5 times
+slower in the worst case.
+
+The remaining difference in speed is probably due both to the fact that the
+current CLI backend emits slightly suboptimal code and to the fact that the
+underlying .NET JIT compiler is highly optimized to handle the bytecode
+generated by C\# compilers.
 
 \subsection{Object-oriented features}
 
@@ -142,10 +198,36 @@
 as a local variable.  As a result, the generated code results in a simple loop
 doing additions in-place.
 
-Table XXX show the time spent to run the benchmark with various input
-arguments. Again, we can see that the jitted code is up to 500 times faster
-than the interpreted one.  Moreover, the code generated by the JIT is
-\textbf{faster} than the equivalent C\# code.  
+\begin{table}[ht]
+  \centering
+  \begin{tabular}{|l|r|r|r|r||r|r|}
+    \hline
+    \textbf{n} & 
+    \textbf{Interp} &
+    \textbf{JIT} &
+    \textbf{JIT 2} &
+    \textbf{C\#} &
+    \textbf{Interp/JIT 2} &
+    \textbf{JIT 2/C\#} \\
+    \hline
+
+    $10$    &   0.031  &  0.453  &  0.000  &  0.000  &      N/A  &  N/A   \\
+    $10^7$  &  43.063  &  0.516  &  0.047  &  0.063  &  918.765  &  0.750 \\
+    $10^8$  &     N/A  &  0.875  &  0.453  &  0.563  &      N/A  &  0.806 \\
+    $10^9$  &     N/A  &  4.188  &  3.672  &  5.953  &      N/A  &  0.617 \\
+
+    \hline
+
+  \end{tabular}
+  \caption{Accumulator benchmark}
+  \label{tab:accumulator}
+\end{table}
+
+
+Table \ref{tab:accumulator} shows the results for this benchmark.  Again, we
+can see that the speedup of the JIT over the interpreter is comparable to that
+of the other two benchmarks.  However, the really interesting part is the
+comparison with the equivalent C\# code: the code generated by the JIT is
+actually \textbf{faster}.
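+
+To give an idea of the ``simple loop doing additions in-place'' mentioned
+above, the residual code produced by the JIT for the accumulator is roughly
+equivalent to the following sketch (RPython-like pseudocode, not the actual
+generated CLI bytecode; the exact increment is illustrative):
+
+\begin{lstlisting}
+def accumulator_loop(n):
+    # sketch: once the JIT has virtualized the accumulator object, its
+    # attribute lives in a local variable and the loop reduces to plain
+    # in-place additions
+    acc = 0
+    i = 0
+    while i < n:
+        acc = acc + i
+        i = i + 1
+    return acc
+\end{lstlisting}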
 
 Probably, the C\# code is slower because:
 

Modified: pypy/extradoc/talk/ecoop2009/benchmarks/results.txt
==============================================================================
--- pypy/extradoc/talk/ecoop2009/benchmarks/results.txt	(original)
+++ pypy/extradoc/talk/ecoop2009/benchmarks/results.txt	Thu Dec 18 14:15:45 2008
@@ -1,21 +1,24 @@
 Factorial
 N               Interp       JIT1          JIT2        C# 
-10              0,03125      0,421875      0           0
-10000000        30,98438     0,453132      0,046875    0,03125
-100000000       N/A          0,859375      0,453125    0,35937
-1000000000      N/A          4,84375       4,640625    3,4375
+10              0.03125      0.421875      0           0
+10000000        30.98438     0.453132      0.046875    0.03125
+100000000       N/A          0.859375      0.453125    0.35937
+1000000000      N/A          4.84375       4.640625    3.4375
 
 
 Fibonacci
 N               Interp       JIT1          JIT2        C# 
-10              0,0312576    0,45313       0           0
-10000000        29,359367    0,46875       0,015617    0,015625
-100000000       N/A          0,6875        0,25001     0,234375
-1000000000      N/A          2,9531        2,5         2,453125
+10              0.0312576    0.45313       0           0
+10000000        29.359367    0.46875       0.015617    0.015625
+100000000       N/A          0.6875        0.25001     0.234375
+1000000000      N/A          2.9531        2.5         2.453125
 
 Accumulator
 N               Interp       JIT1          JIT2        C# 
-10              0,03125      0,453132      0           0
-10000000        43,0625      0,515625      0,04687     0,0625
-100000000       N/A          0,87500       0,453125    0,5625
-1000000000      N/A          4,18750       3,67187     5,953125
+10              0.03125      0.453132      0           0
+10000000        43.0625      0.515625      0.04687     0.0625
+100000000       N/A          0.87500       0.453125    0.5625
+1000000000      N/A          4.18750       3.67187     5.953125
+
+
+


