Where to put the error handling test?

Peng Yu pengyu.ut at gmail.com
Tue Nov 24 11:14:19 EST 2009


On Tue, Nov 24, 2009 at 4:58 AM, Dave Angel <davea at ieee.org> wrote:
> Peng Yu wrote:
>>
>> On Mon, Nov 23, 2009 at 9:44 PM, Lie Ryan <lie.1296 at gmail.com> wrote:
>>
>>>
>>> Peng Yu wrote:
>>>
>>>>
>>>> Suppose that I have a function f() that calls g(). I can put a test on
>>>> the argument 'x' in either g() or f(). I'm wondering what the common
>>>> practice is.
>>>>
>>>> My thought is that if I put the test in g(x), the code of g(x) is
>>>> safer, but the test is not necessary when g() is called by h().
>>>>
>>>> If I put the test in f(), then g() becomes more efficient when other
>>>> code calls g() and guarantees that x will pass the test, even though
>>>> the test code is not in g(). But there might be some caller of g()
>>>> that passes an 'x' that would not pass the test, if the test were in
>>>> g().
>>>>
>>>
>>> Typically, you test for x as early as possible, e.g. just after user
>>> input (or file or url load or whatever). After that test, you can (or
>>> should be able to) assume that all functions will always be called
>>> with the correct argument. This is the ideal situation; it's not
>>> always easy to do.
>>>
>>> In any case though, don't optimize early.
>>>
>>
>> Let's suppose that g() is refactored out of f() and is called not only
>> by f() but also by other functions, and g() is likely to be called by
>> new functions as well.
>>
>> If I don't optimize early, I should put the test in g(), rather than
>> f(), right?
>>
>>
>
> Your question is so open-ended as to be unanswerable.  All we should do in
> this case is supply some guidelines so you can guess which one might apply
> in your particular case.
>
> You could be referring to a test that triggers alternate handling.  Or you
> could be referring to a test that notices bad input by a user, or bad data
> from an untrusted source.  Or you could be referring to a test that
> discovers bugs in your code.  And there are variations of these, depending
> on whether your user is also writing code (eval, or even import of
> user-supplied mixins), etc.
>
> The first thing that's needed in the function g() is a docstring, defining
> what inputs it expects, and what it'll do with them.  Then if it gets any
> input that doesn't meet those requirements, it might throw an exception.  Or
> it might just get an arbitrary result.  That's all up to the docstring.
>  Without any documentation, nothing is correct.
>
> Functions that are only called by trusted code need not have explicit tests
> on their inputs, since you're writing it all.  Part of debugging is catching
> those cases where f() can pass bad data to g().  If it happens because bad
> data is passed to f(), then you have a bug in that caller.  Eventually, you
> get to the user.  If the bad data comes from the user, it should be caught
> as soon as possible, and feedback supplied right then.
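
If I understand your advice so far, it amounts to something like the
sketch below (the names, the docstring, and the ValueError are just my
own illustration):

def g(x):
    """Double x.

    x must be a positive integer; behavior for any other input is
    undefined, and no explicit check is done here.
    """
    return x * 2

def f(raw):
    """Take raw user input, validate it once, and delegate to g()."""
    x = int(raw)                     # may itself raise ValueError
    if x <= 0:
        raise ValueError("expected a positive integer, got %r" % raw)
    return g(x)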

I'm still confused by the guideline that an error should be caught as
early as possible.

Suppose I have the following call chain

f1() --> f2() --> f3() --> f4()

The input to f1() might cause an error in f4(). This error can, of
course, be caught in f1() whenever I want to do so. In the worst case,
I could duplicate the code of f2 and f3, and the test code of f4, in
f1(), to catch the error in f1 rather than f4. But I don't think that
this is what you mean.
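
Concretely, the chain I have in mind looks something like this (the
names and the sqrt call are just an example I made up to illustrate
the question):

import math

def f4(x):
    # fails here with ValueError if x is negative -- where should the
    # explicit check go?
    return math.sqrt(x)

def f3(x):
    return f4(x) + 1

def f2(x):
    return f3(x) * 2

def f1(raw):
    x = float(raw)      # user input enters the chain here
    return f2(x)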

Then the problem is where to put the test code most effectively. I
would consider 'whether it is obvious to test the condition in the
given function' as the guideline. However, it might be equally obvious
to test the same thing in two functions, for example, f1 and f4.

In this case, I originally thought that I should put the test code in
f1 rather than f4, if f1, f2, f3 and f4 are all the functions that I
have in the package that I am making. But it is possible that some
time later I add functions f5(), ..., f10() that call f4(). Since f4
doesn't have the test code, f5(), ..., f10() would each need the same
test code. This is clearly redundant. If I instead move the test code
into f4(), there is redundancy between f1 and f4.
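
For example, if the check lives only in the callers, every new caller
of f4 has to repeat it (again just a sketch):

def f5(raw):
    x = float(raw)
    if x < 0:                  # same check as in f1
        raise ValueError("x must be non-negative: %r" % raw)
    return f4(x) - 1

def f6(raw):
    x = float(raw)
    if x < 0:                  # ...and repeated again here
        raise ValueError("x must be non-negative: %r" % raw)
    return f4(x) / 3.0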

I'm wondering how you would solve the above problem.

> The assert statement ought to be the correct way to add tests in g() that test whether
> there's such a bug in f().  Unfortunately, in CPython it defaults to debug
> mode, so scripts that are run will execute those tests by default.
>  Consequently, people leave them out, to avoid slowing down code.
>
>
>
> It comes down to trust.  If you throw the code together without a test
> suite, you'll be a long time finding all the bugs in non-trivial code.  So
> add lots of defensive tests throughout the code, and pretend that's
> equivalent to a good test system.  If you're writing a library to be used by
> others, then define your public interfaces with exceptions for any invalid
> code, and write careful documentation describing what's invalid.  And if
> you're writing an end-user application, test their input as soon as you get
> it, so none of the rest of the application ever gets "invalid" data.
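
Regarding assert: for calls that come only from trusted code, I take
it you mean something like the following (a sketch; running CPython
with -O strips these checks because __debug__ becomes False):

import math

def f4(x):
    # Precondition check that only guards against bugs in trusted
    # callers; it disappears under "python -O".
    assert x >= 0, "f4() called with negative x: %r" % x
    return math.sqrt(x)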

Having test code for every function and every class (even the ones
that are internal to the package) is basically what I am doing.
However, if I decide to put the validation code in f1(), then I cannot
have my test code exercise the error case of f4(). If the rule is to
test each function/class extensively, then I have to put the
error-handling code in f4(). But that contradicts catching the error
as early as possible and avoiding code redundancy.
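
For example, the error case of f4() can only be exercised directly if
f4() itself raises; with unittest that test would look roughly like
this (mypackage is just a placeholder for wherever f4 lives):

import unittest
from mypackage import f4    # hypothetical module name

class TestF4(unittest.TestCase):
    def test_negative_x_raises(self):
        # possible only because f4() itself rejects negative input
        self.assertRaises(ValueError, f4, -1.0)

if __name__ == '__main__':
    unittest.main()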

Could you suggest a global solution to all of the problems I mentioned above?


