Code correctness and testing strategies

Ben Finney bignose+hates-spam at benfinney.id.au
Sun Jun 8 06:28:42 EDT 2008


David <wizzardx at gmail.com> writes:

> I'm asking about this, because as suggested by various posters, I
> have written my latest (small) app by following a Behaviour-Driven
> Development style.

Congratulations on taking this step.

> That went well, and the code ended up much more modular than if I
> hadn't followed BDD. And I feel more confident about the code
> quality than before ;-) The testing module has about 2x the lines of
> code as the code being tested.

This ratio isn't unusual, and is in fact a little on the low side in
my experience. If you get decent (well-founded!) confidence in the
resulting application code, then it's certainly a good thing.

> My problem is that I haven't run the app once yet during development
> :-/

That might be an artifact of doing bottom-up implementation
exclusively, leading to a system with working parts that are only
integrated into a whole late in the process.

I prefer to alternate between bottom-up implementation and top-down
implementation.

I usually start by implementing (through BDD) a skeleton of the entire
application, and get it to the point where a single trivial user story
can be satisfied by running this minimally-functional application.
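
To make that concrete, here is roughly what such a skeleton might
look like for a trivial command-line program. The names 'skeleton.py'
and 'greet' are invented for the example, not prescribed; the single
trivial user story is "the program runs and greets the user", and the
test pins that story down:

    # skeleton.py -- hypothetical walking skeleton: just enough of
    # the application for one trivial user story to be satisfied by
    # actually running it.
    import sys

    def greet(name):
        # The single trivial behaviour the first user story needs.
        return "Hello, %s" % name

    def main(argv=None):
        if argv is None:
            argv = sys.argv
        name = argv[1] if len(argv) > 1 else "world"
        print(greet(name))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

    # test_skeleton.py -- the trivial user story as an automated test.
    import unittest
    import skeleton

    class TrivialUserStoryTest(unittest.TestCase):
        def test_greets_named_user(self):
            self.assertEqual(skeleton.greet("Alice"), "Hello, Alice")

    if __name__ == "__main__":
        unittest.main()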

Then, I make an automated acceptance test for that case, and ensure
that it is run automatically by a build infrastructure (often running
on a separate machine; a sketch in code follows the list) that:

  - exports the latest working tree from the version control system

  - builds the system

  - runs all acceptance tests, recording each result

  - makes those results available in a summary report for the build
    run, with a way to drill down to the details of any individual
    steps
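
A sketch of that build runner as a plain Python script. The commands
shown are placeholders; substitute whatever your version control
system, build procedure, and test harness actually require:

    # build_runner.py -- hypothetical sketch of the build steps
    # listed above. The command lines are stand-ins.
    import subprocess

    STEPS = [
        # (step name, command, working directory)
        ("export",
         ["svn", "export", "http://example.org/repo/trunk",
          "worktree"], "."),
        ("build",
         ["python", "setup.py", "build"], "worktree"),
        ("acceptance-tests",
         ["python", "run_acceptance_tests.py"], "worktree"),
        ]

    def run_build():
        """ Run each step in order, recording each result. """
        results = []
        for (name, command, cwd) in STEPS:
            exit_status = subprocess.call(command, cwd=cwd)
            results.append((name, exit_status))
            if exit_status != 0:
                # A failed step invalidates the later steps.
                break
        return results

    def summary_report(results):
        """ One line per step; details live in each step's log. """
        for (name, exit_status) in results:
            status = ("PASS" if exit_status == 0
                      else "FAIL (exit %d)" % exit_status)
            print("%-18s %s" % (name, status))

    if __name__ == "__main__":
        summary_report(run_build())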

That automated build is then set up to run either periodically as a
scheduled job (e.g. four times a day), or triggered by every commit
to the version control branch nominated for "integration" code.

> Should I go ahead and start manually testing (like I would have from
> the beginning if I wasn't following TDD), or should I start writing
> automated integration tests?

In my experience, small applications often form the foundation for
larger systems.

Time spent ensuring that their build success is automatically
verified at every point in the development process pays off
tremendously: you gain the flexibility to use that small application
with confidence, and you save the time otherwise spent guessing how
ready the small app is for its nominated role in the larger system.
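
For what it's worth, an automated integration test for a small
command-line application need not be elaborate. A sketch, re-using
the hypothetical 'skeleton.py' from above and driving the program
end-to-end the way a user would:

    # test_integration.py -- hypothetical end-to-end test: run the
    # real program as a subprocess, the way a user would, and check
    # only its observable behaviour.
    import subprocess
    import unittest

    class EndToEndTest(unittest.TestCase):

        def test_greets_named_user_via_command_line(self):
            process = subprocess.Popen(
                ["python", "skeleton.py", "Alice"],
                stdout=subprocess.PIPE)
            (stdout, _) = process.communicate()
            self.assertEqual(process.returncode, 0)
            self.assertEqual(stdout.decode().strip(), "Hello, Alice")

    if __name__ == "__main__":
        unittest.main()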

> Is it worth the time to write integration tests for small apps, or
> should I leave that for larger apps?

There is a threshold below which setting up automated build
infrastructure is too much overhead for the value of the system being
tested.

However, this needs to be honestly appraised: can you *know*, with
omniscient certainty, that this "small app" isn't going to be pressed
into service in a larger system where its reliability will be
paramount to the success of that larger system?

If there's any suspicion that this "small app" could end up being used
in some larger role, the smart way to bet would be that it's worth the
effort of setting up automated build testing.

> I've tried Googling for integration testing in the context of TDD or
> BDD and haven't found anything. Mostly information about integration
> testing in general.

I've had success using buildbot <URL:http://buildbot.net/> (which is
packaged as 'buildbot' in Debian GNU/Linux) for automated build and
integration testing and reporting.
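
As an illustration, a fragment of a buildbot master configuration
wiring up the two trigger styles described earlier might look
something like this. The builder and scheduler names are invented,
and the exact API varies between buildbot versions, so treat this as
a sketch rather than a recipe:

    # Fragment of a hypothetical buildbot master.cfg; 'c' is the
    # BuildmasterConfig dictionary that master.cfg defines.
    from buildbot.scheduler import Scheduler, Periodic

    # Build on every commit to the nominated "integration" branch,
    # once the tree has been quiet for two minutes.
    on_commit = Scheduler(
        name="on-commit", branch="integration",
        treeStableTimer=2*60, builderNames=["full-build"])

    # Also build unconditionally every six hours (four times a day).
    every_six_hours = Periodic(
        name="periodic", builderNames=["full-build"],
        periodicBuildTimer=6*60*60)

    c['schedulers'] = [on_commit, every_six_hours]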

> When following BDD or TDD, should one write integration tests first
> (like the unit tests), or later?

All the tests should proceed in parallel, in line with the evolving
understanding of the desired behaviour of the system. This is why the
term "behaviour driven development" provides more guidance than "test
driven development": the tests are subordinate to the real goal,
which is to get the developers, the customers, and the system all
converging on agreement about what the behaviour is meant to be :-)

Your customers and your developers will value frequent feedback on
progress, so:

  - satisfying your automated unit tests will allow you to

  - satisfy your automated build tests, which will allow you to

  - satisfy automated user stories ("acceptance tests"), which will
    allow the customer to

  - view an automatically-deployed working system with new behaviour
    (and automated reports for behaviour that is less amenable to
    direct human tinkering), which will result in

  - the customers giving feedback on that behaviour, which will inform

  - the next iteration of behaviour changes to make, which will inform

  - the next iteration of tests at all levels :-)

-- 
 \     "[W]e are still the first generation of users, and for all that |
  `\     we may have invented the net, we still don't really get it."  |
_o__)                                                 -- Douglas Adams |
Ben Finney


