Clarity vs. code reuse/generality

Gabriel Genellina gagsl-py2 at yahoo.com.ar
Thu Jul 9 03:57:15 EDT 2009


On Wed, 08 Jul 2009 23:19:25 -0300, kj <no.email at please.post> wrote:

> In <mailman.2700.1246895025.8015.python-list at python.org> Tim Rowe  
> <digitig at gmail.com> writes:
>
>> 2009/7/4 kj <no.email at please.post>:
>
>>> Precisely.  As I've stated elsewhere, this is an internal helper
>>> function, to be called only a few times under very well-specified
>>> conditions.  The assert statements check that these conditions
>>> are as intended.  I.e. they are checks against the module writer's
>>> programming errors.
>
>> Good for you. I'm convinced that you have used the assertion
>> appropriately, and the fact that so many here are unable to see that
>> looks to me like a good case for teaching the right use of assertions.
>> For what it's worth, I read assertions at the beginning of a procedure
>> as part of the specification of the procedure, and I use them there in
>> order to document the procedure. An assertion in that position is for
>> me a statement to the user of the procedure "it's your responsibility
>> to make sure that you never call this procedure in such a way as to
>> violate these conditions". They're part of a contract, as somebody
>> (maybe you) pointed out.
>
>> As somebody who works in the safety-critical domain, it's refreshing
>> to see somebody teaching students to think about the circumstances in
>> which a procedure can legitimately be called. The hostility you've
>> received to that idea is saddening, and indicative of why there's so
>> much buggy software out there.
>
> Thanks for the encouragement.
>
> When I teach programming, the students are scientists.  For the
> stuff they do, correctness of the code trumps everything else.  I
> teach them to view assertions as a way of translating their assumptions
> into code.  And by this I mean not only assumptions about the
> correctness of their code (the typical scope of assertions), but
> also, more broadly, assumptions about the data that they are dealing
> with (which often comes from external sources with abysmal quality
> control).

Nobody says you shouldn't check your data, only that "assert" is not the  
right way to do it. Granted, it's easier to write:

   assert x > 0 and y > 0 and x < y

instead of:

   if not (x > 0 and y > 0 and x < y):
       raise ValueError("x=%r y=%r" % (x, y))

but the assert statement is completely ignored if your script is run with  
the -O option, and then no check is done at all.
assert should be used only to verify internal correctness - to detect  
wrong assumptions in the *code* itself. Once you're confident the code is  
OK, you may turn off assertions (and improve program speed). But you must  
*always* validate input data no matter what - so assert is not the right  
tool.
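
A quick way to see the difference (the file and function names here are  
just made up for the example):

   # check_demo.py
   def set_range(x, y):
       # precondition written as an assert
       assert x > 0 and y > 0 and x < y, "x=%r y=%r" % (x, y)
       return (x, y)

   print(set_range(5, 1))   # violates the precondition

Run as "python check_demo.py" and this stops with an AssertionError; run  
as "python -O check_demo.py" and it happily prints (5, 1), letting the  
bad values propagate.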

Even worse, never DO any actual work inside an assertion - it won't be  
done in the optimized version. This method, for instance, becomes almost  
empty when assertions are removed (not the intended behavior!):

def process_all(self):
    for item in self.items:
        assert self.process(item)   # under -O, process() is never called!
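
If you want to keep a sanity check there without losing the work under  
-O, one way (just a sketch) is to do the call first and assert on the  
result:

   def process_all(self):
       for item in self.items:
           result = self.process(item)   # the work always happens
           assert result, "process(%r) failed" % (item,)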

> My scientific code is jam-packed with assertions.  I can't count
> the number of times that one such lowly assertion saved me from a
> silent but potentially disastrous bug.  And yes I find it distressing
> that so so few programmers in my line of work use assertions at
> all.

I'm just saying that some of those assertions probably should be real  
"if ... raise" statements instead - not that you should remove the  
checks. You may use this short function, if you prefer:

def fassert(condition, message='', exc_type=AssertionError):
    # a "function assert" - unlike the statement, it is never optimized away
    if not condition:
        raise exc_type(message)
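
For instance, the range check from above could then be written as  
something like this, and it keeps working under -O:

   fassert(x > 0 and y > 0 and x < y, "x=%r y=%r" % (x, y), ValueError)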

-- 
Gabriel Genellina



