Mathematica 7 compares to other languages

George Neuner gneuner2 at comcast.net
Mon Dec 8 17:40:58 EST 2008


On Sun, 7 Dec 2008 14:53:49 -0800 (PST), Xah Lee <xahlee at gmail.com>
wrote:

>The phenomenon of creating inefficient code is proportional to the
>high-levelness or power of the lang. In general, the higher level the
>lang, the harder it is to produce code that is as efficient as code
>in a lower level lang.

This depends on whether someone has taken the time to create a high
quality optimizing compiler.


>For example, the level or power of langs can be roughly ordered like
>this:
>
>assembly langs
>C, pascal
>C++, java, c#
>unix shells
>perl, python, ruby, php
>lisp
>Mathematica

According to what estimation of "power"?  Assembly, C/C++, C#, Pascal,
Java, Python, Ruby and Lisp are all Turing complete, which means they
can all compute exactly the same set of functions.  I don't know
offhand whether Mathematica is also Turing complete, but if it is then
by that measure it is at most equally powerful.

Grammatical complexity is not exactly orthogonal to expressive power,
but it is mostly so.  Lisp's S-expressions are an existence proof that
a Turing-powerful language can have a very simple grammar.  And while
a 2D symbolic equation editor may be easier to use than spelling out
the elements of an equation in linear textual form, it is not in any
real sense "more powerful".


>the lower level the lang, the more of the programmer's time it
>consumes, but the faster the code runs. Higher level langs may or may
>not be crafted to be as efficient.  For example, code written in
>langs such as perl, python, ruby will never run as fast as C, no
>matter how expert the perler is.

There is no language-level reason that Perl could not run as fast as C
... it's just that no one has cared to implement a compiler that makes
it do so.


>C code will never run as fast as assembly code.

For a large function with many variables and/or subcalls, a good C
compiler will almost always beat an assembly programmer by sheer brute
force - no matter how good the programmer is.  I suspect the same is
true for most HLLs that have good optimizing compilers.

I've spent years doing hard real time programming and I am an expert
in C and a number of assembly languages.  It is (and has been for a
long time) impractical to try to beat a good C compiler for a popular
chip by writing from scratch in assembly.  It's not just that it takes
too long ... it's that most chips are simply too complex for a
programmer to keep all the instruction interaction details straight in
his/her head.  Obviously results vary by programmer, but once a
function grows beyond 100 or so instructions, the compiler starts to
win consistently.  By the time you've got 500 instructions (just a
medium-sized C function) it's virtually impossible to beat the
compiler.
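
As a rough illustration - the routine, file name and compiler
invocation below are my own made-up example, not anything from the
original thread - take a small C function with a handful of live
variables and a subcall, compile it with optimization, and read the
assembly the compiler emits:

    /* weighted_dot.c -- hypothetical example routine.
     * Compile with e.g.  gcc -O2 -S weighted_dot.c  and inspect
     * weighted_dot.s: the register allocation and instruction
     * scheduling across the loop and the call are exactly the
     * bookkeeping a hand-coder would have to reproduce by hand. */
    #include <stddef.h>

    extern double weight(size_t i);   /* hypothetical external subcall */

    double weighted_dot(const double *a, const double *b, size_t n)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)
            acc += weight(i) * a[i] * b[i];
        return acc;
    }

Even at this size the scheduling decisions are not obvious; at a few
hundred instructions they become unmanageable by hand.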

In functional languages, where individual functions tend to be much
smaller, you'll still find very complex functions in the disassembly
that arose from composition, aggressive inlining, generic
specialization, inlined pattern matching, etc.  Here an assembly
programmer can quite often match the compiler for a particular
function (because it is short), but overall will fail to match the
compiler across the composed whole.

When maximum speed is necessary it's almost always best to start with
an HLL and then hand-optimize your optimizing compiler's output.
Humans are quite often able to find additional optimizations in
compiler-generated assembly that they could not have written as well,
from scratch, in the first place.
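
A sketch of that workflow with a gcc-style toolchain (the file name,
flags and routine are my own example, not from the post):

    /* hot_loop.c -- hypothetical routine worth hand-tuning.
     *
     *   1. cc -O2 -S hot_loop.c   emits assembly into hot_loop.s
     *   2. hand-edit hot_loop.s   (scheduling, unrolling, ...)
     *   3. cc -c hot_loop.s       assembles it back to an object file
     *
     * You start from the compiler's best effort rather than from a
     * blank page, and keep any wins you find on top of it. */
    void scale(float *dst, const float *src, float k, unsigned n)
    {
        for (unsigned i = 0; i < n; i++)
            dst[i] = k * src[i];
    }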

George


