REPOST: Re: Python and Ruby: a comparison

Alex Martelli aleax at aleax.it
Wed Jan 2 05:36:18 EST 2002


"Edward Diener" <eldiener at earthlink.net> wrote in message
news:3C322751.9070404 at earthlink.net...
    ...
> > languages?!).  But that doesn't, by any stretch of the imagination, imply
> > either that the language is "left out of the CLR", nor of course that the
> > language is stripped of its original ideas.  I'll still use (e.g.) Mercury
> > backtracking, Haskell typeclasses, or Oberon active-objects, even if
> > "lesser" languages will obviously not "see" them at the interface!
    ...
> Yes you can use .NET versions of languages and as long as you don't use
> the features of the language in managed code which CLR supports, it is
> possible to use all the features of the language internally. However if

Right.

> a .NET language version does not implement key portions of CLR, mainly
> garbage collection, the basic necessary data types, and single
> inheritance OOP, it is a poor candidate to be integrated with .NET.

Yes, there are indeed certain minimal functional requirements that a
language must meet if it is to be meaningfully integrated into the
.NET Platform.  For example, VB6 fell short of some of these reqs
(e.g., it had no implementation-inheritance); thus, VB7 (aka VB.NET)
"had" to be enriched to include such functionality in some way or
other, and it was.  (Nothing forced MS to enrich VB in the specific
way they chose -- making it a substantially better language but with
a serious loss of backwards compatibility -- that wasn't mandated by
CLR reqs, but rather a strategic choice by MS, who no doubt took a
kind of political/strategic advantage of "having to change anyway"
to shoehorn in some language changes they otherwise desired.)


> I still feel this leads to language subsets and people who learn a
> computer language and only use, and know, a portion of that language. It

I think your feelings are misleading you on this subject.  Language
subsets &c are inevitably caused by languages being "larger" than what
some substantial group of people needs, and they happen for every
language with a reasonably long history and wide use.

Some languages just accept that: they AIM to be big and cover a lot
of bases, so subsetting is taken in stride.  C++, Perl, O'CAML, Common
Lisp: nobody has 100% of them in their everyday "active vocabulary",
and very few users indeed reach 100% even in "passive vocabulary"
(the portion of the language that is understood when met in code being
read, even though not normally used in code the person produces).

But even languages which aim to be small, unless they are highly
restricted, end up subsetted anyway.  Few Haskell users routinely
write their own monads: much can be done in Haskell without the
very substantial conceptual jump needed to fully grasp monads and
author your own -- thus, enter subsetting.  Very few Python users
routinely write their own metaclasses: ditto ditto.  It's anything
but rare even for a highly productive and competent Python coder
to have trouble grasping metaclasses even in "passive vocabulary"
terms.  And why not?  The average Pythonista does not need them,
they're a hard step up on the conceptual ladder, so WHY bemoan their
absence from either the active or passive vocabularies of Pythonistas?
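
To give a concrete idea of the step involved, here is a minimal sketch
of what "writing your own metaclass" means, in Python 2.2 terms (the
class names are invented purely for illustration):

    class Tracing(type):
        # A metaclass customizes how CLASSES themselves get built,
        # not how their instances behave.
        def __new__(mcl, name, bases, namespace):
            print "now building class", name
            return type.__new__(mcl, name, bases, namespace)

    class Traced(object):
        __metaclass__ = Tracing    # prints "now building class Traced"
        def hello(self):
            return "hello"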

Subsetting becomes even more prevalent as soon as you accept some
"standard library" features as part of a language.  If you never
do (e.g.) CGI web programming, why should you care about that
subset of a language, or its libraries, that exists strictly for
the benefit of CGI authors?  Yet the language (cum libraries) is
better for having those modules too, widening its usability and
applicability (and similarly for monads, metaclasses, etc etc).

It's certainly mistaken to think that a totally new phenomenon
(the introduction of the .NET Framework and its CLR) "leads to" an
old, existing, and inevitable one.  Post hoc does not necessarily
mean propter hoc, but the reverse implication IS indeed necessary
(as long as time's arrow doesn't start flipping around randomly,
causes MUST come before effects).

> has already happened before .NET was even created with VC++, where a
> great many programmers who know only MS's subset of C++ and even some of
> its incorrect syntax, assume they are using C++ effectively and
> correctly and often they are not.

Extremely similar phenomena prevailed even before Mr Gates knew how
to tell a bit from a byte: most scientists I met in the '70s, who
thought they knew and were using Fortran effectively and correctly,
were just as sadly mistaken -- they actually knew and used (at
times with far from optimal efficiency) some specific dialect of the
Fortran language as supplied by, e.g., Digital Equipment, or IBM, or
yet some other purveyor (or rather, almost invariably, some specific
SUBSET of that specific dialect; few people who learned Fortran on
boxes where you could write, e.g., a literal constant as 'CIAO' ever
knew that the standard way of writing it was 4HCIAO, or could easily
recognize the latter form, just as one example...).


> Managed C++ in .NET is also an
> abortion of C++ which only a computer programming masochist could love.

The ability to play havoc with pointers and memory management in C++,
while inevitably and inextricably part of that language, is hardly a
plus for the vast majority of the application uses to which C++ is (maybe
inappropriately) put on a daily basis.  Doing away with that is the
single biggest productivity enhancement when moving, e.g., to Java.

Yet the loss of templates (generic programming) hurts productivity most
grievously (when imposed upon programmers who have learned to make good
use of templates, of course).  I believe that, still today, "Managed
C++" is the only .NET language that lets you use templates (as the
architects of .NET seem to share with those of Java a horrible blind spot
regarding generic programming -- I keep hearing that Java is due to gain
generic-programming features any day now, but I still can't see them in
Javasoft's released SDKs).

Much as I might prefer "C# with templates" or whatever, therefore, I'm
quite liable to choose "Managed C++" today if tasked to develop some
largish subsystem in and for .NET (given that no "Python .NET" is at
hand: Python programming, as it hinges on signature-based polymorphism
just like templates do, gives similar productivity advantages in even
smoother and more general ways, of course).
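
A tiny made-up illustration of that last point: the following function,
much like a C++ function template would, works with ANY objects that
happen to supply the methods it actually calls -- no common base class,
no declarations (the class and function names are invented for the
example):

    def total_area(shapes):
        # Only requirement on the items: an area() method.
        total = 0.0
        for shape in shapes:
            total = total + shape.area()
        return total

    class Square:
        def __init__(self, side): self.side = side
        def area(self): return self.side * self.side

    class Disk:
        def __init__(self, radius): self.radius = radius
        def area(self): return 3.14159 * self.radius * self.radius

    # Square and Disk share no ancestry, yet both "fit the signature":
    print total_area([Square(3), Disk(1)])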

> I anticipate this happening with most .NET versions of languages and
> that many programmers of these languages will only know and use the .NET
> subset and not the full language.

If most programmers do indeed start eschewing "unmanaged" memory access
for application-level programming, this will no doubt reduce bugs and
increase productivity.  But giving up extremely low-level features
(unsuitable for application programming needs) is something that
basically only touches C++ and similar system-level languages, since
application-oriented languages don't offer those anyway (not the
sensible ones!-).

Apart from "unmanaged memory access" issues, I totally disagree with your
thesis.  If a given programmer or programming shop wants the semantics
of C#/VB.NET, they will mostly be using C# or VB.NET, depending on syntax
sugar tastes.  When somebody goes to the trouble of using the .NET versions
of, say, APL, or Haskell, or Mercury, it definitely will NOT be just in
order to get peculiar syntax sugar on the same semantics: rather, it will
be because SOME supplementary features of those respective languages are
of interest (maybe array operations, typeclasses, and backtracking,
respectively).

It will therefore be an EXTREMELY RARE phenomenon for programmers to be
using "strange" (non-C#/VB.NET) languages in their .NET versions and "only
know and use the .NET subset", assuming that, by the latter, you mean the
semantics supported by CLR "at the interface between separate components".


Alex





