What is different with Python ?

Mike Meyer mwm at mired.org
Tue Jun 14 17:40:42 EDT 2005


Andrea Griffini <agriff at tin.it> writes:

> On Mon, 13 Jun 2005 21:33:50 -0500, Mike Meyer <mwm at mired.org> wrote:
>
>>But this same logic applies to why you want to teach abstract things
>>before concrete things. Since you like concrete examples, let's look
>>at a simple one:
>>
>>   a = b + c
>>
> ...
>>In a very
>>few languages (BCPL being one), this means exactly one thing. But
>>until you know the underlying architecture, you still can't say how
>>many operations it is.
>
> That's exactly why
>
>      mov eax, a
>      add eax, b
>      mov c, eax

Um, you didn't do the translation right: that sequence loads a, adds b,
and stores the result into c, i.e. it computes c = a + b rather than
a = b + c.

> or, even more concrete and like what I learned first
>
>      lda $300
>      clc
>      adc $301
>      sta $302
>
> is simpler to understand.

No, it isn't - because you have to worry about more details. In
particular, when programming in an HLL the compiler will take care of
allocating storage for the variables. In assembler, the programmer has
to deal with it. These extra details make the code more complicated.

> Writing programs in assembler takes longer exactly because
> the language is *simpler*. Assembler has less implicit
> semantics because it's closer to the limited brain of our
> stupid silicon friend.

You're right, but you have the wrong reasons. There were studies done
during the 70s/80s showing that the number of debugged LOC a programmer
produces per day is roughly independent of the language being written.
Assembler takes longer to write not because it's simpler, but because
it takes more LOC to do the same operations.

You can avoid that problem - to a degree - if you use an assembler
that eschews the standard assembler syntax for something more
abstract. For instance, Whitesmiths had a Z80 assembler that let you write:

          a = b + c

and it would generate the proper instructions via direct
translation. This didn't reduce the LOC to the level of C, but it
made a significant dent in it.
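
Just to make "direct translation" concrete - this is my own sketch in
Python, not Whitesmiths' actual tool, and the address map is made up -
here is roughly what such an assembler does with a statement like that,
using the same 6502-style mnemonics as the earlier example:

    # Hypothetical sketch: translate "dst = src1 + src2" straight into
    # 6502-style load/add/store instructions, one operand at a time.
    import re

    def translate(statement, addresses):
        """Turn 'a = b + c' into lda/clc/adc/sta lines via an address table."""
        m = re.match(r"\s*(\w+)\s*=\s*(\w+)\s*\+\s*(\w+)\s*$", statement)
        if not m:
            raise ValueError("this sketch only handles 'dst = src1 + src2'")
        dst, src1, src2 = m.groups()
        return [
            "lda %s" % addresses[src1],   # load the first operand
            "clc",                        # clear carry before adding
            "adc %s" % addresses[src2],   # add the second operand
            "sta %s" % addresses[dst],    # store the result
        ]

    if __name__ == "__main__":
        addresses = {"b": "$300", "c": "$301", "a": "$302"}  # assumed layout
        for line in translate("a = b + c", addresses):
            print(line)

One source line still becomes four instructions, which is the point:
the notation gets friendlier, but the LOC ratio to C doesn't go away.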

Also, you can only claim that being closer to the chip is "simpler"
because you haven't dealt with a sufficiently complicated chip
yet. Try writing code where you have to worry about gate settling
times or pipeline stalls, and then tell me whether you think it's
"simpler" than dealing with an HLL.

> Programming in assembler also really teaches you (deep
> in your soul) who that terrible "undefined behaviour"
> monster you'll meet when programming in C really is.
>
>>Anything beyond the abstract statement "a gets the result
>>of adding b to c" is wasted on them.
>
> But if you say, for example, that
>
>      del v[0]
>
> just "removes the first element from v", you will end up
> with programs that do that in a stupid way; in fact you
> can easily get unusable programs, and programmers who
> go around saying "python is slow" for that reason.

That's an implementation detail. It's true in Python, but isn't
necessarily true in other languages. Yes, good programmers need to
know that information - or, as I said before, they need to know that
they need to know that information, and where to get it.
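
To put a number on that implementation detail (this timing sketch is
mine, not something from the thread, and the figures will vary by
machine): on a CPython list, del v[0] shifts every remaining element
down one slot, so draining a list from the front is quadratic, while a
container built for that access pattern, like collections.deque, isn't.

    # Contrast deleting from the front of a list (O(n) per deletion,
    # everything after index 0 is shifted) with popping from the front
    # of a deque (O(1) per deletion).
    from collections import deque
    from timeit import timeit

    def drain_list(n):
        v = list(range(n))
        while v:
            del v[0]          # shifts the whole tail left each time

    def drain_deque(n):
        v = deque(range(n))
        while v:
            v.popleft()       # constant time

    if __name__ == "__main__":
        n = 20000
        print("list :", timeit(lambda: drain_list(n), number=3))
        print("deque:", timeit(lambda: drain_deque(n), number=3))

The programmer doesn't need to read the C source of the list type to
act on this; knowing that the detail exists, and where to look it up,
is the point being argued.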

>>It's true that in some cases, it's easier to remember the
>>implementation details and work out the cost than to
>>remember the cost directly.
>
> I'm saying something different, i.e. that unless you
> understand (you have at least a rough picture, you
> don't really need all the details... but there must
> be no "magic" in it) how the standard C++ library is
> implemented there is no way at all you have any
> chance to remember all the quite important implications
> for your program. It's just IMO impossible to memorize
> such a big quantity of unrelated quirks.
> Things like big O, for example, but also undefined-behaviour
> risks like having iterators invalidated
> when you add an element to a vector.

That may well be true of the standard C++ library - I don't write
it. But it certainly doesn't appear to be true of, for instance,
Python internals. I've never seen someone explain why, for instance,
building a string by repeated addition is O(n^2) beyond the very
abstract "it creates a new string with each addition". No concrete
details at all.
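
Here is the concrete version of that abstract explanation, as I
understand it (my sketch, nothing from the thread): if every addition
really builds a new string, then producing an n-piece result copies on
the order of n^2 characters in total, whereas str.join copies each
character once. (CPython can sometimes optimize the concatenation in
place, which is itself exactly the kind of implementation detail under
discussion.)

    # Count the characters copied by a naive concatenation loop versus
    # a single join, assuming each addition copies the whole new string.

    def copies_for_naive_concat(n, piece_len=1):
        """Characters copied building an n-piece string one addition at a time."""
        total = 0
        length = 0
        for _ in range(n):
            length += piece_len
            total += length      # each addition copies the entire new string
        return total

    def copies_for_join(n, piece_len=1):
        """str.join copies each character once into the final string."""
        return n * piece_len

    if __name__ == "__main__":
        for n in (1000, 10000, 100000):
            print(n, copies_for_naive_concat(n), copies_for_join(n))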


>>> Are you genuinely saying that abelian groups are
>>> easier to understand than relative integers?
>>
>>Yup. Then again, my formal training is as a mathematician. I *like*
>>working in the problem space - with the abstract. I tend to design
>>top-down.
>
> The problem with designing top down is that when
> building (for example applications) there is no top.

This is simply false. The top of an application is the
application-level object.

> I found this very simple and very powerful rationalization
> of my gut feeling about building complex systems in
> Meyer's "Object Oriented Software Construction", and
> it's one with which I completely agree. Top down is a
> nice way for *explaining* what you already know, or
> for *RE*-writing, not for creating or for learning.

I also like OOSC - because it makes the abstract type system a part of
the language. The approach Meyer takes emphasizes the abstract.

You haven't stated how you think complex systems should be built, and
I think we took different lessons away from Meyer, so you'll have to
state it explicitly.

I agree - you don't generally build systems top-down. But building is
not designing. The common alternative to top-down design - bottom-up
design - makes the possibility of a semantic disconnect when you reach
the top too likely.

> IMO no one can really think that teaching abelian
> groups to kids first and only later introducing
> them to relative numbers is the correct path.
> The human brain simply doesn't work like that.

You didn't ask how it should be taught - you asked which I found more
natural. Numbers are an abstract concept, and can be taught as such.

> You are saying this, but I think here it's more your
> love for discussion than what you really think.

Actually, I think it's a terminology problem. Or maybe a problem
defining the levels.

>>The same is true of programmers who started with concrete details on a
>>different platform - unless they relearn those details for that
>>platform.
>
> No. This is another very important key point.
> Humans are quite smart at finding general rules
> from details; you don't have to burn your hand on
> every possible fire. Unfortunately sometimes there
> is the OPPOSITE problem... we infer general rules
> that do not apply from just too few observations.

Your opposite problem is avoided by not teaching the details until
they are needed, and making sure you teach that those are
implementation details, so the student knows not to draw such
conclusions from them.

>>The critical things a good programmer knows about those
>>concrete details is which ones are platform specific and which aren't,
>>and how to go about learning those details when they go to a new
>>platform.
>
> I never observed this problem. You really did?

As mentioned, you see it all the time in c.l.python. People come from
other languages, and try to write Python as if the rules for that
other language apply.

>>If you confuse the issue by teaching the concrete details at the same
>>time as you're teaching programming, you get people who can't make
>>that distinction. Such people regularly show up with horrid Python
>>code because they were used to the details for C, or Java, or
>>whatever.
>
> Writing C code with python is indeed a problem that
> is present. But I think this is a minor price to pay.
> Also it's something that will be fixed with time and
> experience.

It can be fixed from the start by teaching the student the difference
between abstract programming concepts and implementation details.

>>> I suppose that over there whoever is caught reading
>>> TAOCP is slammed in jail ...
>>
>>Those taught the concrete method would never have been exposed to
>>anything so abstract.
>
> Hmmm; TAOCP is The Art Of Computer Programming, what
> is the abstract part of it? The code presented is
> only MIX assembler. There are math prerequisites for
> a few parts, but I think no one could call it "abstract"
> material (no one that actually spent some time reading
> it, that is).

It tackled abstract problems like "sorting". The students I'm talking
about never dealt with anything that abstract.

>>Actually, it's a natural place to teach the details of that kind of
>>thing. An OS has to deal with allocating a number of different kinds
>>of memory, with radically different behaviors and constraints.
>
> hehehe... and a program like TeX instead doesn't even
> need to allocate memory.

You're confusing "using a facility" with "building a facility". Yes,
TeX needs to allocate memory. You don't need to know anything about
the insides of a memory allocator to allocate memory. You can write an
application like TeX without having to write a memory allocator. You
can't write an OS without having to write *several* memory allocators.
Are you going to claim that the right way to learn about memory
allocators is by looking at uses of them, or by looking at
implementations of them?
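
To show the size of that gap (a toy of my own, not anything from the
thread): using an allocator is one call, while building even a trivial
first-fit allocator already means tracking free blocks, splitting them,
and handling exhaustion - and a real OS allocator adds alignment,
coalescing, fragmentation and concurrency on top of that.

    # Toy first-fit allocator over a fixed-size arena, to contrast
    # "building the facility" with merely calling it. Offsets stand in
    # for addresses; no coalescing, alignment or locking is attempted.

    class ToyAllocator:
        def __init__(self, size):
            self.free = [(0, size)]   # (offset, length) free blocks
            self.used = {}            # offset -> length of live blocks

        def alloc(self, n):
            for i, (off, length) in enumerate(self.free):
                if length >= n:                               # first fit
                    if length == n:
                        del self.free[i]
                    else:
                        self.free[i] = (off + n, length - n)  # split block
                    self.used[off] = n
                    return off
            raise MemoryError("arena exhausted")

        def release(self, off):
            n = self.used.pop(off)
            self.free.append((off, n))   # no coalescing in this toy

    if __name__ == "__main__":
        heap = ToyAllocator(1024)
        a = heap.alloc(100)          # "using the facility" is this easy
        b = heap.alloc(200)
        heap.release(a)              # everything above is what "building" takes
        print(heap.free, heap.used)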

> Pairing this with teaching abelian groups first to kids
> (why not fiber spaces then?) and with calling TAOCP too
> "abstract" tells me that apparently you're someone who
> likes to talk just for talking, or that your religion
> doesn't allow you to type in smileys.

Now you're resorting to straw men and name-calling. That's an
indication that you no longer have any real points.

           <mike
-- 
Mike Meyer <mwm at mired.org>			http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


