Is this a true statement?

Andrew Dalke dalke at acm.org
Mon Jun 25 17:08:35 EDT 2001


Tim Daneliuk wrote:
>I still do not understand/agree with this line of reasoning entirely.
>As I mentioned in a previous post, one has to define what we mean by
>"do" or "program".
 ...
>Now, if we say "Language X can "do" things we cannot "do" in Language Y" we
>are saying one of several possible things:
 ...
>3) There is something about how X and Y are *implemented* which causes
>   this disparity  (almost certainly the case).
 ...
>3) is where the rocks start to get thrown.  If I do 3), have I "extended"
>the language to do something it could not previously?  NO!  I have
>implemented what I could already do with 2) in a more programmer-friendly
>manner, but I have not changed the net computational power of the language.
>
>So, I close with this claim (which I believe is well supported by
>the theory):  Define "Computational Power" to be the number of
>algorithms a given language can compute.  Then:  Increasing
>"Computational Power" for a given language is only possible if
>that language is less than Turing-Complete.  In every other case,
>all we are doing is changing the *programmer's* perception and
>paradigm of work, not the net "Computational Power" of the language.

Software is used for more things than "Computational Power."
Take again the example I posted a few days ago on allocating in
shared memory.

Suppose I have a hardware environment where there is special
shared memory hardware which can be used to pass data between
otherwise different processors.  One example is the CAVE
visualization system, which is an immersive reality system
where images are projected on the walls of a room to make it
look like you are inside of a synthetic environment -- the
closest we can get to Star Trek's holodeck.  One processor
is used to generate the visualization model into shared memory,
then other CPUs (one for each wall) get the model to generate
the appropriate image with the correct viewpoint.

In C++ I can define how 'new' and 'delete' work on a given
class, so that I can allocate and deallocate the objects
to be shared from the shared memory arena while leaving
the other objects (like file I/O) in local non-shared (and
more plentiful and cheaper) memory.

There is no way, nor even a proposal of one, to use special
memory allocation routines for specific objects, at least not
in a very Pythonic fashion.  So using different memory arenas
is an example where you can "do" something in C++ which is
much harder to do in Python.  It is not a limitation or
restriction of the implementation of Python - it's in the
language itself.

To bring it back to Turing machines, the concept of "shared
memory" or "I/O ports" or "interrupts" or "signals" does not
exist in a TM.  To talk about them in a TM context requires
emulating their physical behaviour, and it is that emulation -
that translation from a language description to a physical
device - that some languages do not provide.

                    Andrew
                    dalke at acm.org





