Comparing lists - somewhat OT, but still ...

Paul Rubin http
Sun Oct 16 21:54:52 EDT 2005


Steven D'Aprano <steve at REMOVETHIScyber.com.au> writes:
> But if you are unlikely to discover this worst case behaviour by
> experimentation, you are equally unlikely to discover it in day to
> day usage.

Yes, that's the whole point.  Since you won't discover it by
experimentation and you won't discover it in day-to-day usage, you may
very well only find out about it when an attacker clobbers you.  If
you want to prevent that, you HAVE to discover it by analysis, or at
least do enough analysis to determine that a successful attack won't
cause you a big catastrophe (ok, this is probably the case for most of
the stuff that most of us do).
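
To make that concrete, here's a minimal sketch (a toy of my own, not
anything from the standard library) of how an algorithm can look fine
under random testing and still blow up on adversarial input.  A
quicksort that always takes the first element as its pivot runs in
O(n log n) on random data but goes quadratic on already-sorted input,
so an attacker who controls the input can force the slow path at will:

import random
import sys
import time

sys.setrecursionlimit(10000)  # sorted input recurses ~n deep

def naive_quicksort(xs):
    # First-element pivot: fast on random data, quadratic on sorted
    # input, because one partition is always empty.
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return naive_quicksort(smaller) + [pivot] + naive_quicksort(larger)

def time_sort(data, label):
    start = time.time()
    naive_quicksort(data)
    print("%-16s %.3f seconds" % (label, time.time() - start))

n = 3000
time_sort([random.random() for _ in range(n)], "random input:")
time_sort(list(range(n)), "sorted (attack):")

No amount of testing with random inputs is likely to reveal the second
timing; you have to know the algorithm to know which input hurts it.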

Sure, there are some applications that are never exposed to hostile
users, but that category excludes pretty much anything that connects
to the internet or handles data that came from the internet.  Any
general-purpose development strategy that doesn't take hostile users
into account is of limited usefulness.

> Most algorithms have "worst case" behaviour significantly slower
> than their best case or average case, and are still perfectly useful.

Definitely true.  However, far more often than many implementers
seem to realize, you have to take the worst case into account.
There's no magic bullet like "experiments" or "unit tests" that
produces reliable software.  You have to stay acutely aware of what
you're doing at every level.
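
For what it's worth, the classical way to take this particular worst
case into account is randomization: pick the pivot at random, so the
worst case still exists but no fixed input triggers it reliably, and
the expected running time is O(n log n) on every input.  A sketch,
reusing the same toy quicksort from above:

import random

def randomized_quicksort(xs):
    # Random pivot: the attacker can no longer choose an input that
    # deterministically forces the quadratic case.
    if len(xs) <= 1:
        return xs
    pivot = xs[random.randrange(len(xs))]
    smaller = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    larger = [x for x in xs if x > pivot]
    return (randomized_quicksort(smaller) + equal +
            randomized_quicksort(larger))

assert randomized_quicksort(list(range(2000))) == list(range(2000))

That doesn't eliminate the worst case, it just takes the choice of it
away from the attacker, which is exactly the kind of thing you only
learn by analysis, not by unit tests.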


