[Python-checkins] BAD Benchmark Results for Python Default 2016-02-29

Stewart, David C david.c.stewart at intel.com
Mon Feb 29 18:45:37 EST 2016


Does anybody know why django declined so much?

On 2/29/16, 10:30 AM, "lp_benchmark_robot" <lp_benchmark_robot at intel.com> wrote:

>Results for project Python default, build date 2016-02-29 03:08:49 +0000
>commit:		83814cdca928
>previous commit:	ed30eac90f60
>revision date:	2016-02-28 20:13:44 +0000
>environment:	Haswell-EP
>	cpu:		Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz 2x18 cores, stepping 2, LLC 45 MB
>	mem:		128 GB
>	os:		CentOS 7.1
>	kernel:	Linux 3.10.0-229.4.2.el7.x86_64
>
>Baseline results were generated using release v3.4.3, with hash b4cbecbc0781
>from 2015-02-25 12:15:33+00:00
>
>----------------------------------------------------------------------------------
>              benchmark   relative   change since   change since   current rev run
>                          std_dev*       last run       baseline          with PGO
>----------------------------------------------------------------------------------
>:-)           django_v2      0.30%         -2.38%          8.43%            13.60%
>:-|             pybench      0.16%          0.26%          0.52%             5.88%
>:-(            regex_v8      3.06%         -0.78%         -5.37%             5.20%
>:-|               nbody      0.08%         -0.20%         -1.74%            10.47%
>:-|        json_dump_v2      0.26%         -1.34%         -1.58%            11.44%
>:-|      normal_startup      0.81%         -0.40%          1.71%             5.46%
>----------------------------------------------------------------------------------
>* Relative Standard Deviation (Standard Deviation/Average)
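The relative standard deviation in the first column is just the standard deviation divided by the mean. A minimal sketch of that computation (the function name and sample timings are hypothetical, not from the robot's harness):

```python
import statistics

def relative_std_dev(samples):
    """Relative standard deviation: population std dev divided by the mean."""
    return statistics.pstdev(samples) / statistics.mean(samples)

# Hypothetical run times in seconds for one benchmark
times = [1.02, 1.00, 0.99, 1.01]
print(f"{relative_std_dev(times):.2%}")  # roughly 1.1%
```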
>
>If this is not displayed properly please visit our results page here: http://languagesperformance.intel.com/bad-benchmark-results-for-python-default-2016-02-29/
>
>Note: Benchmark results are measured in seconds.
>
>Subject Label Legend:
>Labels are assigned based on how each workload's performance changed relative
>to the previous measurement iteration.
>NEUTRAL: performance did not change by more than 1% for any workload
>GOOD: performance improved by more than 1% for at least one workload and there
>is no regression greater than 1%
>BAD: performance dropped by more than 1% for at least one workload and there is
>no improvement greater than 1%
>UGLY: performance improved by more than 1% for at least one workload and also
>dropped by more than 1% for at least one workload
>
>
>Our lab does a nightly source pull and build of the Python project and measures
>performance changes against the previous stable version and the previous nightly
>measurement. This is provided as a service to the community so that quality
>issues with current hardware can be identified quickly.
>
>Intel technologies' features and benefits depend on system configuration and may
>require enabled hardware, software or service activation. Performance varies
>depending on system configuration.

