[ python-Bugs-960995 ] test_zlib is too slow

SourceForge.net noreply at sourceforge.net
Sat Jun 5 15:35:57 EDT 2004


Bugs item #960995, was opened at 2004-05-26 17:35
Message generated for change (Comment added) made by nascheme
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=960995&group_id=5470

Category: Python Library
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 3
Submitted By: Michael Hudson (mwh)
>Assigned to: Neil Schemenauer (nascheme)
Summary: test_zlib is too slow

Initial Comment:
I don't know what it's doing, but I've never seen it fail, and waiting for it has certainly wasted quite a lot of my life :-)

----------------------------------------------------------------------

>Comment By: Neil Schemenauer (nascheme)
Date: 2004-06-05 19:35

Message:
Logged In: YES 
user_id=35752

Fixed in test_zlib.py 1.26. I removed a bunch of magic numbers while I was at it.
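
The actual change is in test_zlib.py revision 1.26; purely as a hedged
illustration (not the real diff), "removing magic numbers" in a test like
this usually means naming the bare step-size literals once, e.g. assuming
the 256/64 defaults Brett mentions below:

    # Hypothetical cleanup sketch, not the actual test_zlib.py 1.26 change.
    COMPRESS_CHUNK = 256     # default input step fed to the compressor
    DECOMPRESS_CHUNK = 64    # default max output requested per decompress call

    # ...the tests then say COMPRESS_CHUNK instead of a bare 256, so the
    # slow variants can scale one named knob instead of hunting literals.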

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2004-05-27 06:24

Message:
Logged In: YES 
user_id=31435

Persevere: taking tests you don't understand and just moving them to artificially bloat the time taken by an unrelated test is so lazy on so many counts I won't make you feel bad by belaboring the point <wink>. Moving them to yet another -u option doomed to be unused is possibly worse.

IOW, fix the problem, don't shuffle it around.

Or, IOOW, pare the expensive ones down. Since they never fail for anyone, it's not like they're testing something delicate. Does it *need* to try so many distinct cases? That will take some thought, but it's a real help that you already know the answer <wink>.

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2004-05-26 19:52

Message:
Logged In: YES 
user_id=357491

A quick look at the tests Tim lists shows that each of them runs the basic incremental decompression test 8 times, scaling the data from the base size up to 2**7 (= 128) times it; the size multipliers come from [1 << n for n in range(8)]. So we get exponential growth in data size for each test, starting from a 1921-character base string.

They also compress in 32-byte steps and then decompress in 4-byte steps, where the defaults are 256 and 64.
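
As a minimal sketch of that pattern (the names and base string are
stand-ins, not the real test_zlib code):

    import zlib

    BASE = b"spam and eggs, " * 128   # stand-in for the ~1921-byte base string

    for factor in [1 << n for n in range(8)]:   # 1x up to 128x the base size
        data = BASE * factor
        compressed = zlib.compress(data)

        # Decompress incrementally, asking for at most `step` bytes of
        # output per call; the slow variants use 4 where the default is 64.
        d = zlib.decompressobj()
        step = 4
        pieces = []
        chunk = compressed
        while chunk:
            pieces.append(d.decompress(chunk, step))
            chunk = d.unconsumed_tail
        pieces.append(d.flush())
        assert b"".join(pieces) == data

Decompressing a quarter megabyte four bytes at a time is plausibly where
most of the wall-clock time goes.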

Perhaps we should just move these tests to something like test_zlib_long and have it require the overloaded largefile resource?
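
For what it's worth, a minimal sketch of that gating, assuming a new
(hypothetical) test_zlib_long.py and the test_support.requires() mechanism
that regrtest's -u resources use:

    # Hypothetical test_zlib_long.py, per the suggestion above.
    from test import test_support

    # Signals a skip unless regrtest was run with -u largefile (or -u all),
    # so a default regrtest run leaves the expensive cases out.
    test_support.requires('largefile')

    # ...the expensive test_manydecomp* cases would move here...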

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2004-05-26 18:45

Message:
Logged In: YES 
user_id=31435

I'm sure most of the cases in test_zlib are quite zippy (yes, pun intended). Do the right thing: determine which cases are the time hogs, and pare them down. By eyeball, only these subtests consume enough time to notice:

test_manydecompimax
test_manydecompimaxflush
test_manydecompinc

s/_many/_some/ isn't enough on its own <wink>.

----------------------------------------------------------------------

Comment By: Raymond Hettinger (rhettinger)
Date: 2004-05-26 18:17

Message:
Logged In: YES 
user_id=80475

I hate this slow test.  If you want to label this as an explicitly requested resource (regrtest -u zlib), then be my guest.

----------------------------------------------------------------------
