benchmark

alex23 wuwei23 at gmail.com
Thu Aug 7 03:44:14 EDT 2008


Steven D'Aprano wrote:
> In other words, about 20% of the time he measures is the time taken to
> print junk to the screen.

Which makes his claim that "all the console outputs have been removed
so that the benchmarking activity is not interfered with by the IO
overheads" somewhat confusing... did he not notice the output? Or did
he just write it off as a weird Python side-effect?
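
To make the point concrete, here's a minimal sketch of the effect (not
his benchmark; busy_work/quiet/noisy are just names made up for the
illustration). The only difference between the two runs is a print call
inside the measured region:

import time

def busy_work(n=100000):
    total = 0
    for i in range(n):
        total += i * i
    return total

def quiet(reps=200):
    for _ in range(reps):
        busy_work()

def noisy(reps=200):
    for _ in range(reps):
        print(busy_work())   # console IO inside the measured region

def timed(label, func):
    start = time.time()
    func()
    print("%s: %.3f seconds" % (label, time.time() - start))

timed("no output", quiet)
timed("printing each result", noisy)

The "noisy" figure includes however long the terminal takes to render
all that output, which has nothing to do with Python's execution speed.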

I find his reluctance to entertain more idiomatic implementations
particularly telling. It seems he's less interested in actual
performance comparisons and more interested in showing that writing
static-language-style code in dynamic languages is bad, which isn't
really anything new to anyone anywhere, I would've thought.

All up, this reminds me of last year's benchmarking fiasco that
demonstrated Storm's ORM handling being -incredibly- faster than
SQLAlchemy's SQL expression handling, something that just didn't seem
to be borne out by user experience. Eventually, Mike Bayer
reverse-engineered the benchmarks and discovered that, surprise
surprise, the tests -weren't equal-: in one test SQLA was issuing a
commit per insert, while Storm was performing -no commits whatsoever-.
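
If you want to see how much that alone skews things, here's a rough
sketch using plain sqlite3 rather than Storm or SQLAlchemy (and not
their actual benchmark code; timed_run and the table are made up for
illustration):

import os
import sqlite3
import tempfile
import time

def timed_run(label, commit_each, n=1000):
    # fresh on-disk database so commit costs are actually visible
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER, data TEXT)")
    start = time.time()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?, ?)", (i, "row %d" % i))
        if commit_each:
            conn.commit()   # transaction flushed per row, as in the SQLA test
    # when commit_each is False nothing is ever committed, as in the Storm test
    print("%s: %.3f seconds" % (label, time.time() - start))
    conn.close()

timed_run("commit per insert", True)
timed_run("no commits at all", False)

On a file-backed database the commit-per-insert run is typically many
times slower, which is exactly the kind of inequality that made the
original comparison meaningless.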

Benchmarks, IMO, are like statistics. You can tweak them to prove
pretty much any position you already hold.
