Hardware take on software testing.

Peter Hansen peter at engcorp.com
Fri Jun 6 22:33:46 EDT 2003


Paddy McCarthy wrote:
> 
> Peter Hansen <peter at engcorp.com> wrote in message news:<3EE0CF2B.57F474E4 at engcorp.com>...
> > ... a new approach to design, testing, and coding, called Test-Driven
> > Development (TDD).
> 
> On TDD when do you know you are done?

Oh, *good* question!  <grin>

> In the Hardware development process we graph the number of bugs found
> over time and end up with an S curve, we also set coverage targets
> (100 percent statement coverage for executable statements is the
> norm), and rather like the TDD approach of software, some teams have
> dedicated Verification engineers who derive a verification spec from
> the design spec and write tests for the design to satisfy this,
> (independently).

Actually, the verification spec is not at all like how you do it with
TDD, since that spec is written entirely up-front.  With TDD, only one
test at a time is even written, let alone passed by writing new code.
You would "never" write two tests at the same time, since you wouldn't
necessarily know whether you needed the second test until you had
written the first, watched it fail, made it pass, and then paused to
consider what the next test should be.  Maybe what you thought you were
going to write to make the first test pass is not what you actually
needed to write (usually you end up with code simpler than you guessed
it would be), and maybe that second test should be a little different
now.  Writing that second test up-front would just have wasted your time.
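
To make that concrete, here's roughly what the very first step might
look like in Python with the standard unittest module.  The Stack class
and its method names are invented purely for illustration; the point is
the order of events: the test came first, was watched to fail, and only
then was the code beneath it written.

    import unittest

    class TestStack(unittest.TestCase):
        # The only test in existence so far.  It was written before any
        # Stack code, run once to see it fail, and only then was the
        # class below added to make it pass.
        def test_new_stack_is_empty(self):
            self.assertEqual(Stack().size(), 0)

    # The simplest thing that passes the one test above -- nothing more.
    class Stack:
        def size(self):
            return 0

    if __name__ == '__main__':
        unittest.main()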

> If TDD uses no random directed generation, then don't you tend to test
> strictly the assumed behaviour?

Bob Martin wrote "The act of writing a unit test is more an act of 
design than of verification.  It is also more an act of documentation 
than of verification.  The act of writing a unit test closes a remarkable 
number of feedback loops, the least of which is the one pertaining to 
verification of function."

Let me go back to the "graph the number of bugs" thing you mention above.
If you are working in a world where that concept holds much meaning, 
you might have to change gears to picture this: with XP and TDD, you
generally expect to have *no* bugs to graph.  Think about that for a
moment.  You write a test.  You write a bit of code, and get the test
to pass.  You write another test.  You write some code, only enough
to pass the test, and run both tests.  If the code broke something in
the first test, you immediately go and fix what you did wrong.
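
Sticking with the made-up Stack example from above, the next trip
around that loop might look like this: one new test, just enough new
code to satisfy it, and then the *whole* suite is run again, not just
the new test.

    import unittest

    class Stack:
        # Grown one test at a time; it still does only what the tests demand.
        def __init__(self):
            self._items = []
        def size(self):
            return len(self._items)
        def push(self, item):
            self._items.append(item)

    class TestStack(unittest.TestCase):
        def test_new_stack_is_empty(self):      # test #1, written earlier
            self.assertEqual(Stack().size(), 0)

        def test_push_increases_size(self):     # test #2, written just now
            s = Stack()
            s.push('x')
            self.assertEqual(s.size(), 1)

    if __name__ == '__main__':
        # Running both tests after the change means a break in test #1
        # shows up right away, not at "test time".
        unittest.main()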

Now fast-forward to months later, when you have literally hundreds of
little tests, each one having driven the development of a few lines of
code.  You have effectively 100% code coverage.  In fact, you probably
have tests which overlap, but that's a good thing here.  Now you make
a tiny mistake, which traditionally would not be noticed until "test 
time", way down at the end of the project when the QA/verification people 
get their hands on your code.  Instead of surfacing, perhaps, months
later just before shipping, the mistake immediately fails one or two
tests (or a whole pile) and you fix the problem.
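
(How you run those hundreds of tests isn't the interesting part, but
for what it's worth, with Python's unittest the whole pile can be
collected and run in one shot.  The test module names below are made
up; in a real project there would be dozens of them.)

    import unittest
    # Hypothetical test modules, one per area of the code.
    import test_stack, test_parser, test_orders

    def all_tests():
        loader = unittest.TestLoader()
        return unittest.TestSuite(loader.loadTestsFromModule(m)
                                  for m in (test_stack, test_parser, test_orders))

    if __name__ == '__main__':
        unittest.TextTestRunner(verbosity=1).run(all_tests())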

Or, in spite of the fact that you actually *drove the development of
the code with the tests*, and that therefore there is really no code
that doesn't need to be there to pass the tests, you manage to let
a bug get through.  Maybe it was more of an error in interpretation
of the functional requirements.  In other words, almost certainly one
of your tests is actually wrong.  Alternatively, the tests are all fine
but you're in the unfortunate (but fortunately rare when you do it this
way) position of having an actual, real _bug_ in spite of all those tests.

What do you do?  Add it to the bug database and see the graph go up?
No, you don't even *have* a bug database!  There are no bugs to go in it,
except this one.  What's the best next step?  Write a test!

The new test fails in the presence of the bug, and now you modify the
code to pass the test (and to keep passing all those other tests) and
check in the change.  Problem solved.  No bug, no bug database, no graph.
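
To be clear, that test for the bug is nothing exotic, just one more
unit test that pins the misbehaviour down.  A made-up illustration (the
parse_price function and its dollar-sign bug are invented, not from any
real code):

    import unittest

    def parse_price(text):
        # After the fix: the leading '$' is stripped, which is exactly
        # what the new test below demanded.  Before the fix this was
        # simply float(text).
        return float(text.lstrip('$'))

    class TestParsePrice(unittest.TestCase):
        def test_plain_number(self):              # older test, still passing
            self.assertEqual(parse_price('3.50'), 3.50)

        def test_price_with_dollar_sign(self):    # the bug, captured as a test
            # Written first, while the bug was still in place, and watched
            # to fail for the right reason before the code was touched.
            self.assertEqual(parse_price('$19.95'), 19.95)

    if __name__ == '__main__':
        unittest.main()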

Maybe this sounds goofy or unrealistic to some who haven't tried it.
Personally I thought it was novel enough to warrant an experiment when
I first encountered it, but it didn't take long before I was convinced
that this approach was fundamentally different and more powerful than
the previous approaches I'd tried over twenty-plus years of coding.
It may not feel right to some people, but since it's pretty darn easy
to read up on the approach and experiment for a few hours or days to
get a feel for it, "don't knock it if you haven't tried it".  :-)

To answer the original question of "how do you know when you're done?"
I would say that TDD itself doesn't really say, but in XP you have
what are called "acceptance tests", which are similar to the unit tests
in that they are a fully automated suite of tests that verify the 
high-level functionality of the entire program.  When your code
passes all the unit tests you have written to drive its development,
*and* all the acceptance tests, then you're done.  (That's another one
of the "things of beauty" in XP: the tests aggressively control scope
since you don't have to build anything for which you don't have a test.)
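
An acceptance test in that sense is just another automated test, but
aimed at the program's outside surface instead of at individual
classes.  A toy sketch, with a trivial word_count "application"
standing in for a real program (the name and behaviour are invented;
real XP acceptance tests come from the customer's stories):

    import unittest

    def word_count(text):
        # Stand-in for a whole application's top-level entry point.
        return len(text.split())

    class AcceptanceTests(unittest.TestCase):
        # Exercises end-to-end behaviour the customer cares about, not
        # the internals that the unit tests drove into existence.
        def test_counts_words_in_a_sentence(self):
            self.assertEqual(word_count('to be or not to be'), 6)

    if __name__ == '__main__':
        unittest.main()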

-Peter
