Comparing lists - somewhat OT, but still ...

Christian Stapfer nil at dev.nul
Sun Oct 16 14:28:55 EDT 2005


"Steven D'Aprano" <steve at REMOVETHIScyber.com.au> wrote in message 
news:pan.2005.10.16.16.01.43.591166 at REMOVETHIScyber.com.au...
> On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
>
>> Come to think of an experience that I shared
>> with a student who was one of those highly
>> creative experimentalists you seem to have
>> in mind. He had just bought a new PC and
>> wanted to check how fast its floating point
>> unit was as compared to our VAX. After
>> having done his wonderfully creative
>> experimenting, he was utterly dejected: "Our (old)
>> VAX is over 10'000 times faster than my new PC",
>> he told me, almost in despair.
>
> Which it was. It finished executing his code in almost 1/10,000th of the
> time his PC could do.
>
>> Whereupon I,
>> always the uncreative, dogmatic theoretician,
>> who does not believe that much in the decisiveness
>> of the outcome of mere experiments, told him
>> that this was *impossible*, that he *must* have
>> made a mistake...
>
> It wasn't a mistake and it did happen.

Yes, yes, of course it was a mistake, since
the conclusion he wanted to draw from this
experiment was completely *wrong*. More
generally: blind experimentation *without*
supporting theory is mostly useless.

> The VAX finished the calculation
> 10,000 times faster than his PC.
> You have a strange concept of "impossible".

What about trying, for a change, to restrain
your polemical temperament? It only leads to
quite unnecessarily long exchanges in this NG.

>>     It turned out that the VAX compiler had been
>> clever enough to hoist his simple-minded test
>> code out of the driving loop.

But, mind you, his test was meant to determine
*not* the cleverness of the VAX compiler *but*
the speed of its floating-point unit. In that
regard his experiment was a complete *failure*.
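
To see how a compiler can gut such a test, here is a minimal
sketch in C - hypothetical code, *not* the student's actual
test; the iteration count and the use of clock() are my own
assumptions:

    #include <stdio.h>
    #include <time.h>

    #define N 100000000L

    int main(void)
    {
        volatile double sink;  /* defeats dead-code elimination */
        double x = 0.0, acc = 1.0;
        clock_t start, end;
        long i;

        /* Naive timing loop: the body does not depend on i and
         * its product is a compile-time constant, so an optimizer
         * may hoist the work out of the loop - or delete the loop
         * outright, as the VAX compiler effectively did. */
        start = clock();
        for (i = 0; i < N; i++)
            x = 1.000001 * 2.000002;
        end = clock();
        printf("naive:   %.3f s (x = %f)\n",
               (double)(end - start) / CLOCKS_PER_SEC, x);

        /* Counter-measure: chain each iteration to the previous
         * one and store the result into a volatile, so the work
         * can neither be hoisted nor discarded. */
        start = clock();
        for (i = 0; i < N; i++)
            acc = acc * 1.000001 + 1e-9;
        end = clock();
        sink = acc;
        printf("chained: %.3f s (acc = %f)\n",
               (double)(end - start) / CLOCKS_PER_SEC, sink);
        return 0;
    }

On a compiler that performs the optimization, the first loop
reports (near) zero time while the second reports the real cost.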

>
> Optimizations have a tendency to make a complete mess of Big O
> calculations, usually for the better. How does this support your
> theory that Big O is a reliable predictor of program speed?

My example was meant to point out that
experimental outcomes, unless they are
carefully related back to supporting theory,
are quite *worthless*. This story was not
about Big-Oh notation; it was a cautionary
tale about the relation between experiment
and theory more generally.
- Got it now?

> For the record, the VAX 9000 can have up to four vector processors each
> running at up to 125 MFLOPS each, or 500 in total. A Pentium III runs at
> about 850 Mflops. Comparing MIPS or FLOPS from one system to another is
> very risky, for many reasons, but as a very rough and ready measure
> of comparison, a four processor VAX 9000 is somewhere about the
> performance of a P-II or P-III, give or take some fudge factor.

Well, that was in the late 1980s, and our VAX
most definitely did *not* have a vector
processor: we were doing work in industrial
automation at the time, with not much
number-crunching in sight.

> So, depending on when your student did this experiment, it is entirely
> conceivable that the VAX might have been faster even without the
> optimization you describe.

Rubbish. Why do you want to go off on a tangent
like this? Forget it! I just do not have the time
to start quibbling again.

> Of course, you haven't told us what model VAX,

That's right. And it was *not* important, since
the tale has a simple moral: experimental
outcomes *without* supporting theory (be it of
the Big-Oh variety or something else, depending
on context) are mostly worthless.

> or how many processors, or what PC your student had,
> so this comparison might not be relevant.

Your going off on another tangent like this is
certainly not relevant to the basic insight
that experiments without supporting theory
are mostly worthless, I'd say...

>> In fact, our VAX
>> calculated the body of the loop only *once*
>> and thus *immediately* announced that it had finished
>> the whole test - the compiler on this student's
>> PC, on the other hand, had not been clever enough
>> for this type of optimization: hence the difference...
>
> Precisely. And all the Big O notation in the world will not tell you that.
> Only an experiment will. Now, perhaps in the simple case of a bare loop
> doing the same calculation over and over again, you might be able to
> predict ahead of time what optimisations the compiler will do. But for
> more complex algorithms, forget it.
>
> This is a clear case of experimentation leading to the discovery
> of practical results which could not be predicted from Big O calculations.

The only problem being: it was *me*, basing
myself on "theory", who rejected the "experimental
result" that the student had accepted *as-is*.
(The student was actually an engineer; I myself
had been trained as a mathematician. Maybe that
rings a bell?)

> I find it quite mind-boggling that you would use this as if it was a triumph
> of abstract theoretical calculation when it was nothing of the sort.

This example was not meant to be any such
thing. It was only about this: experimenting
*without* relating experimental outcomes to
theory is mostly worthless. What's more:
constructing an experiment without adequate
supporting theory is also mostly worthless.

>>   I think this is really a cautionary tale for
>> experimentalists: don't *believe* in the decisiveness
>> of the outcomes of your experiments, but try to *understand*
>> them instead (i.e. relate them to your theoretical grasp
>> of the situation)...
>
> Or, to put it another way: your student discovered

No. You didn't read the story correctly.
The student had accepted the result of
his experiments at face value. It was only
because I had "theoretical" grounds to reject
that experimental outcome that he did learn
something in the process.
  Why not, for a change, be a good loser?

> something by running an experimental test of his code
> that he would never have learnt in a million
> years of analysis of his algorithm: the VAX compiler
> was very cleverly optimized.

Ok, he did learn *that*, in the end. But he
did *also* learn to thoroughly mistrust the
outcome of a mere experiment. Experiments
(not just in computer science) are quite
frequently botched. How do you discover
botched experiments? - By trying to relate
experimental outcomes to theory.
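
To make "relating outcomes to theory" concrete, here is a
minimal sketch in C of the kind of plausibility check that
would have exposed the botched measurement on the spot (the
figures are made up; peak_flops is exactly the sort of number
that theory - here, knowledge of the hardware - must supply):

    #include <stdio.h>

    /* Returns 1 if a reported measurement is plausible: n_ops
     * floating-point operations in 'seconds' must not imply a
     * rate above the machine's theoretical peak. */
    static int plausible(double n_ops, double seconds,
                         double peak_flops)
    {
        double implied = n_ops / seconds;
        if (implied > peak_flops) {
            printf("implied %.3g FLOPS exceeds peak %.3g FLOPS:\n"
                   "suspect the experiment (was the loop "
                   "optimized away?)\n", implied, peak_flops);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        /* Hypothetical figures: 1e8 multiplications reported
         * to finish in a millisecond on a ~1 MFLOPS machine. */
        plausible(1e8, 0.001, 1e6);
        return 0;
    }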

Regards,
Christian




