Code correctness and testing strategies

David wizzardx at gmail.com
Sat May 24 16:17:21 EDT 2008


>> In order to get a new system working, it's nice to be able to throw
>> together a set of modules quickly, and if that doesn't work, scrap it
>> and try something else. There's a rule (I forget where) that your
>> first system will always be a prototype, regardless of intent.
>
> That's fine. It's alright to prototype without tests. The only rule is that
> you cannot then use any of that code in production.
>

So, at what point do you start writing unit tests? Do you decide:
"Version 1 I am definitely going to throw away and not put into
production, but version 2 will definitely go into production, so I
will start it with TDD"?

Where this doesn't work so well is if version 2 is a refactored and
incrementally-improved version of version 1. At some point you need to
decide "this is close to the version that will be in production, so
let's go back and write unit tests for all the existing code".
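
For what it's worth, I imagine that retrofitting step looks roughly
like the sketch below (parse_record() is a made-up stand-in for
prototype code being promoted to production):

import unittest

def parse_record(line):
    # Prototype code being promoted: turn a "name,age" line into a
    # (name, age) tuple.
    name, age = line.strip().split(",")
    return name, int(age)

class TestParseRecord(unittest.TestCase):
    # Tests written after the fact, pinning down the behaviour the
    # prototype already has, so later refactoring can't silently
    # change it.

    def test_basic_record(self):
        self.assertEqual(parse_record("alice,30\n"), ("alice", 30))

    def test_bad_age_raises(self):
        # The prototype currently raises ValueError on junk input;
        # record that fact before refactoring.
        self.assertRaises(ValueError, parse_record, "bob,not-a-number")

if __name__ == "__main__":
    unittest.main()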

>> Problem 3: Slows down development in general
>>
>> Having to write tests for all code takes time. Instead of, say, 10
>> hours of coding and half an hour of manual testing, you spend 2-3
>> hours writing all the tests, and 10 on the code.
>>
> You are either a very slow coder or a very poor tester: there should be a
> lot more than 1/2 hour testing for 10 hours coding. I would say the
> comparison might be 10 hours coding, 10 hours testing, then about a week
> tracking down the bugs which escaped testing and got out to the customers.
> With proper unit tests you will reduce all 3 of these numbers but
> especially the last one. Any bug which gets away from the programmer and is
> only caught further downstream costs vastly more than bugs caught during
> development, and not just for the programmer but for everyone else
> who is affected.

Seriously, 10 hours of testing for code developed in 10 hours? What
kind of environment do you write code for? This may be practical for
large companies with hordes of full-time testing & QA staff, but not
for small companies with just a handful of developers (and where you
need to borrow somone from their regular job to do non-developer
testing). In a small company, programmers do the lions share of
testing. For programmers to spend 2 weeks on a project, and then
another 2 weeks testing it is not very practical when they have more
than one project.

As for your other points - agreed, bugs getting to the customer is
not a good thing. But depending on various factors, it may not be the
end of the world if they do. For example, there are many thousands of
open bugs in open source bug trackers, yet people still use open
source software for important things. Sometimes it is better to have
software with a few bugs than no software at all (or software that is
very expensive, or takes a very long time to develop). See "Worse is
Better": http://en.wikipedia.org/wiki/Worse_is_better. See also:
Microsoft ;-)

>
> You seem to think that people are suggesting you write all the tests up
> front: what you should be doing is interleaving design+testing+coding all
> together. That makes it impossible to account for test time
> separately, as the test time is tightly mixed with other coding. What
> you can be sure of, though, is that after an initial slowdown while
> you get used to the process, your overall productivity will be higher.

Sounds like you are suggesting that I obfuscate my development process
so no one can tell how much time I spent doing what :-)

I think that moderate amounts of unit tests can be beneficial and
need not slow down development significantly (similar to spending a
bit more time using version control versus not using it at all).
Regression tests are a good example (see the sketch below). But going
to the TDD extreme of always coding tests before *any* code, for
*all* projects, does not sit well with me (bad analogy: similar to
wasting time checking each line into version control separately, with
a paragraph of comments).
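
By a regression test I mean something like this minimal sketch
(safe_div() and the bug number are invented for illustration):

import unittest

def safe_div(a, b):
    # Division that returns None rather than raising when b == 0.
    if b == 0:
        return None
    return float(a) / b

class TestSafeDivRegressions(unittest.TestCase):
    def test_divide_by_zero_returns_none(self):
        # Regression test for (made-up) bug #123: safe_div() used to
        # raise ZeroDivisionError here. This test keeps the fix from
        # quietly regressing.
        self.assertEqual(safe_div(1, 0), None)

if __name__ == "__main__":
    unittest.main()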

>
> The first time you make a change to some code and a test which is
> apparently completely unrelated to the change you made breaks is the point
> when you realise that you have just saved yourself hours of debugging when
> that bug would have surfaced weeks later.
>
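
Fair enough - I assume you mean something like the following (a
made-up sketch; normalise() and make_user_id() are hypothetical):

import unittest

def normalise(s):
    # Shared helper used all over the code base. If someone later
    # "harmlessly" changes this to s.strip().lower() for one
    # caller's benefit, the apparently unrelated test below breaks
    # at once, instead of the bug surfacing weeks later.
    return s.strip()

def make_user_id(name):
    return "user-" + normalise(name)

class TestUserIds(unittest.TestCase):
    def test_id_preserves_case(self):
        self.assertEqual(make_user_id(" Alice "), "user-Alice")

if __name__ == "__main__":
    unittest.main()

That said, tests have a maintenance cost of their own.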

The next time your project is running late, your manager and the
customer will be upset if you spend time updating your unit tests
rather than finishing off the project (and handing it over to QA,
etc.) and adding the unit tests later, when there's actually time for
them.

>> Clients, deadlines, etc. require actual software, not
>> tests for software (that couldn't be completed on time because you
>> spent too much time writing tests first ;-)).
>
> Clients generally require *working* software. Unfortunately it is all too
> easy to ship something broken because then you can claim you completed the
> coding on time and any slippage gets lost in the next 5 years of
> maintenance.

That's why you have human testing & QA. Unit tests can help, but they
are a poor substitute for human testing. If the customer is happy with
the first version, you can improve it, fix bugs, and add more unit
tests later.

David

PS: To people following this thread: I don't mean to be argumentative.
This is a subject I find interesting and I enjoy the debate. I'm
playing devil's advocate (troll?) to provoke further discussion.


