merits of Lisp vs Python

Kay Schluehr kay.schluehr at gmx.net
Mon Dec 11 17:06:21 EST 2006


Juan R. wrote:

> Kay Schluehr ha escrito:
> > Note also that a homogenous syntax is not that important when
> > analyzing parse trees ( on the contrary, the more different structures
> > the better ) but when synthesizing new ones by fitting different
> > fragments of them together.
>
> Interesting, could you provide some illustration for this?

My approach is strongly grammar based. You start with a grammar
description of your language. This is really not much different from
using Lex/Yacc except that it is situated and adapted to a pre-existing
language ecosystem. I do not intend to start from scratch.

Besides the rules that constitute your host language you might add:

repeat_stmt ::=  'repeat' ':' suite 'until' ':' test

The transformation target ( the "template" ) is

while True:
    <suite>
    if <test>:
        break

The structure of the rule is also the structure of its constituents in
the parse tree. Since you match the repeat_stmt rule and its
corresponding node in the parse tree you immediately get the <suite>
node and the <test> node:

class FiberTransformer(Transformer):
    @transform
    def repeat_stmt(self, node):
         _suite = find_node(node, symbol.suite)
         _test = find_node(node, symbol.test, depth = 1)
         #
         # create the while_stmt here
         #
         return _while_stmt_node
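
For illustration, here is a minimal sketch of what a `find_node` helper could look like. This is an assumption, not the author's actual implementation: it presumes parse trees in the nested-list format of Python's `parser` module, where each node is `[node_id, child1, child2, ...]`:

```python
# Hedged sketch of find_node over parser-module style trees.
# node_id is a numeric symbol/token id as in the `symbol` module.
def find_node(node, node_id, depth=None):
    """Return the first subnode whose head equals node_id, searching
    depth-first.  If depth is given, do not descend below that many
    levels (depth=1 searches direct children only)."""
    for child in node[1:]:
        if isinstance(child, list):
            if child[0] == node_id:
                return child
            if depth is None or depth > 1:
                found = find_node(child, node_id,
                                  None if depth is None else depth - 1)
                if found is not None:
                    return found
    return None

# Toy tree with made-up node ids: 256 = root, 270 = test, 300 = suite
tree = [256, [270, 'x'], [300, [270, 'y']]]
```

With `depth=1`, as in the `repeat_stmt` method above, only the rule's immediate constituents are matched, so a nested `test` inside the suite is not picked up by mistake.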

So analysis works just fine. But what about creating the transformation
target? The template above cannot be inserted verbatim as a Python
statement, because the rule for a while statement looks like this:

while_stmt: 'while' test ':' suite

That's why the macro expander has to merge the <suite> node that was
passed into the template with the template's if_stmt, producing a new
suite node.
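
The merge step can be sketched as follows. The node ids are hypothetical and the node layout `[node_id, *children]` is assumed; a real expander must also handle the NEWLINE/INDENT/DEDENT tokens that live inside a suite:

```python
# Hedged sketch of the merge step, not the framework's actual code.
SUITE, IF_STMT = 300, 295  # hypothetical numeric node ids

def merge_into_suite(suite_node, if_stmt_node):
    """Append the generated if_stmt to the statements of the captured
    <suite>, yielding the single suite node that while_stmt expects."""
    assert suite_node[0] == SUITE and if_stmt_node[0] == IF_STMT
    return suite_node + [if_stmt_node]

merged = merge_into_suite([SUITE, 'stmt1'], [IF_STMT, 'if_part'])
```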

Now think about having created a while_stmt from your original
repeat_stmt. You return the while_stmt, and it has to be fitted into the
original syntax tree in place of the repeat_stmt. This must be done
carefully: otherwise structure in the tree is destroyed, or the node is
inserted in a place where the compiler does not expect it.

The framework has to do a lot of work to ease the pain for the
metaprogrammer:

a) create the correct transformation target
b) fit the target into the syntax tree
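
Step b) can be sketched as a splice over the nested-list tree format assumed above (again an illustration, not the framework's actual code):

```python
# Hedged sketch: splice the generated while_stmt into the original parse
# tree exactly where the repeat_stmt sat, comparing nodes by identity.
def replace_node(tree, old, new):
    """Replace the subnode `old` with `new`, in place.
    Returns True if the replacement happened."""
    for i, child in enumerate(tree):
        if child is old:
            tree[i] = new
            return True
        if isinstance(child, list) and replace_node(child, old, new):
            return True
    return False

# Toy tree with made-up node ids: 400 = repeat_stmt, 297 = while_stmt
tree = [256, [400, 'repeat_stmt']]
old = tree[1]
replace_node(tree, old, [297, 'while_stmt'])
```

A real framework would additionally check that a while_stmt is grammatically admissible at the insertion point, which is the "place where the compiler does not expect it" problem mentioned above.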

Nothing here depends particularly on Python; it holds for any language
with a fixed grammar description. I've worked exclusively with LL(1)
grammars, but I see no reason why this general scheme should not work
with more powerful grammars and more complicated languages - Ruby's,
for example.

> > The next question concerns compositionality of language
> > enhancements or composition of even completely independent language
> > definitions and transformers both on source and on binary level. While
> > this is not feasible in general without creating ambiguities, I believe
> > this problem can be reduced to ambiguity detection in the underlying
> > grammars.
>
> A bit ambiguous my reading. What is not feasible in general? Achieving
> compositionality?

Given two languages L1 = (G1, T1) and L2 = (G2, T2), where G1, G2 are
grammars and T1, T2 are transformers that translate source written in
L1 or L2 into some base language L0 = (G0, Id): can G1 and G2 be
combined into a new grammar G3 such that the transformers T1 and T2 can
also be used to transform L3 = (G3 = G1 (x) G2, T3 = T1 (+) T2)? In the
general case G3 will be ambiguous and the answer is NO. But it could
also be YES in many relevant cases. So the question is whether it is
necessary and sufficient to check that the "crossing" of G1 and G2 is
feasible, i.e. does not produce ambiguities.
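
A toy illustration of the simplest such check for LL(1): if two grammars each contribute alternatives for the same nonterminal, the combination is only safe when the FIRST sets of the alternatives stay disjoint. The grammar encoding and the second "repeat" rule are made up for the example; real FIRST computation over nonterminals is of course more involved:

```python
# Hedged toy sketch: detect first/first conflicts in a combined grammar.
# A grammar maps a nonterminal to a list of alternatives, each a list of
# terminal strings (terminals only, to keep FIRST trivial).
def first_conflicts(grammar):
    """Return (nonterminal, terminal) pairs where two alternatives
    start with the same terminal, i.e. LL(1) is violated."""
    conflicts = []
    for nt, alternatives in grammar.items():
        seen = set()
        for alt in alternatives:
            if alt and alt[0] in seen:
                conflicts.append((nt, alt[0]))
            if alt:
                seen.add(alt[0])
    return conflicts

# G1 contributes the repeat/until statement; a hypothetical G2
# contributes a different statement that also starts with 'repeat'.
G3 = {
    "stmt": [
        ["repeat", ":", "suite", "until", ":", "test"],   # from G1
        ["repeat", "expr", "times", ":", "suite"],        # from G2
        ["while", "test", ":", "suite"],                  # base language
    ],
}
print(first_conflicts(G3))  # -> [('stmt', 'repeat')]
```

Here the crossing G1 (x) G2 fails the check, so the combined language would need disambiguation before T1 (+) T2 could be applied.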



