Other notes

Andrew Dalke dalke at dalkescientific.com
Tue Dec 28 23:10:23 EST 2004


bearophileHUGS:
[on Python's O(n) list insertion/deletion at any place other than the tail]
> (My hypothesis: to keep list implementation a bit simpler, to avoid
> wasting memory for the head buffer, and to keep them a little faster,
> avoiding the use of the skip index S).

Add to that how infrequently it's needed.

> 2) I usually prefer explicit verbose syntax, instead of cryptic symbols
> (like decorator syntax), but I like the infix Pascal syntax ".." to
> specify a closed interval (a tuple?) of integers or letters
 
> assert 1..9 == tuple(range(1,10))
> for i in 1..12: pass
> for c in "a".."z": pass

It's been proposed several times.  I thought there was a PEP
but I can't find it.  One problem with it: what does

  for x in 1 .. "a":

do?  (BTW, it needs to be 1 .. 12 not 1..12 because 1. will be
interpreted as the floating point value "1.0".)

What does
  a = MyClass()
  b = AnotherClass()
  for x in a .. b:
    print x

do?  That is, what's the generic protocol?  In Pascal it
works because you also specify the type and Pascal has
an incr while Python doesn't.
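
For comparison, the closed interval above is a one-liner today.  A
minimal sketch (the helper names are mine, not anything proposed):

def interval(start, stop):
    # closed range of integers: interval(1, 9) -> (1, 2, ..., 9)
    return tuple(range(start, stop + 1))

def char_interval(start, stop):
    # closed range of characters: char_interval("a", "c") -> ('a', 'b', 'c')
    return tuple([chr(c) for c in range(ord(start), ord(stop) + 1)])

assert interval(1, 9) == tuple(range(1, 10))
assert char_interval("a", "c") == ("a", "b", "c")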

> 3) I think it can be useful a way to define infix functions, like this
> imaginary decorator syntax:
> 
> @infix
> def interval(x, y): return range(x, y+1) # 2 parameters needed
> 
> This may allow:
> assert 5 interval 9 == interval(5,9)

Maybe you could give an example of when you need this in
real life?

What does

  1 + 2 * 3 interval 9 / 3 - 7

do?  That is, what's the precedence?  Does this only work
for binary or is there a way to allow unary or other
n-ary (including 0-ary?) functions?
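
For what it's worth, something close to an infix call can be faked
today with operator overloading.  A rough sketch (ordinary
__ror__/__or__ tricks, not a real language feature):

class Infix(object):
    # wraps a two-argument function so it can be spelled  x |op| y
    def __init__(self, func):
        self.func = func
    def __ror__(self, left):
        # 'x |op' partially applies the left argument
        return Infix(lambda right: self.func(left, right))
    def __or__(self, right):
        # '... | y' supplies the right argument and calls the function
        return self.func(right)
    def __call__(self, left, right):
        # the normal call syntax still works
        return self.func(left, right)

interval = Infix(lambda x, y: range(x, y + 1))
assert (5 |interval| 9) == interval(5, 9)

Note the precedence is then simply that of '|', which is one answer
(and one constraint) for the precedence question above.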


> 4) The printf-style formatting is powerful, but I still think it's
> quite complex for usual purposes, and I usually have to look its syntax
> in the docs. I think the Pascal syntax is nice and simpler to remember
> (especially for someone with a little Pascal/Delphi experience ^_^),

But to someone with C experience, or experience with any language
that derives its format strings from C, Python's is easier to
understand than your Pascal one.

A Python view is that there should be only one obvious way to do
a task.  Supporting both C and Pascal style format strings breaks
that.  Then again, having both the old % and the new PEP 292 string
templates means there's already two different ways to do string
formatting.
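
For reference, the two current spellings look like this
(string.Template is the 2.4 addition):

import string

print "%(name)s is %(age)d" % {"name": "Eric", "age": 42}
print string.Template("$name is $age").substitute(name="Eric", age=42)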

> For example this Delphi program:
  ...
> const a = -12345.67890;
  ...
> writeln(a:4:20);
  ...
> Gives:
  ...
> -12345.67890000000000000000

Python says that's

>>> "%.20f" % -12345.67890
'-12345.67890000000079453457'

I don't think Pascal is IEEE enough.

Note also that the Pascal-style formatting strings are less
capable than Python's, though few people use features like

>>> "% 2.3f" % -12.34
'-12.340'
>>> "% 2.3f" % 12.34
' 12.340'


> 5) From the docs about round:
  ...
> But to avoid a bias toward rounding up there is another way to do this:

There are even more ways than that.  See
  http://www.python.org/peps/pep-0327.html#rounding-algorithms

The solution chosen was not to change 'round' but to provide
a new data type -- Decimal.  This is in Python 2.4.
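
A minimal sketch of the bias-free ("banker's") rounding you get from
Decimal in 2.4:

import decimal

# ROUND_HALF_EVEN is the default context rounding for Decimal
print decimal.Decimal("2.5").quantize(decimal.Decimal("1"))   # 2
print decimal.Decimal("3.5").quantize(decimal.Decimal("1"))   # 4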

> 6) map( function, list, ...) applies function to every item of list and
> return a list of the results. If list is a nested data structure, map
> applies function to just the upper level objects.
> In Mathematica there is another parameter to specify the "level" of the
> apply.
  ...
> I think this semantic can be extended:

A real-life example would also be helpful here.

What does
  map(len, "Blah", level = 200)
return?

In general, most people prefer to not use map and instead
use list comprehensions and (with 2.4) generator comprehensions.
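
For the nested case, a plain nested list comprehension already says
what a "level" argument would.  A small sketch:

data = [[1, 2, 3], [4, 5]]

# level-1 map: apply len to the top-level items
print [len(row) for row in data]              # [3, 2]

# level-2 map: apply a function to every leaf instead
print [[x * 2 for x in row] for row in data]  # [[2, 4, 6], [8, 10]]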


> level=0 means that the map is performed up to the leaves (0 means
> infinitum, this isn't nice, but it can be useful because I think Python
> doesn't contain a built-in Infinite).

You need to learn more about the Pythonic way of thinking
about these things.  The usual solution for this is to have
"level = None".
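
A sketch of what such a map might look like with that convention.
The name deep_map and its exact level semantics are my own guesses
at what was meant, not a proposal:

def deep_map(func, seq, level=None):
    # level=1 behaves like the ordinary map(); larger levels descend
    # further before applying func; level=None goes down to the leaves
    result = []
    for item in seq:
        if level is None and isinstance(item, list):
            result.append(deep_map(func, item, None))
        elif level is not None and level > 1 and isinstance(item, list):
            result.append(deep_map(func, item, level - 1))
        else:
            result.append(func(item))
    return result

print deep_map(str, [[1, 2], [3]])      # [['1', '2'], ['3']]
print deep_map(len, [[1, 2], [3]], 1)   # [2, 1]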

> 7) Maybe it can be useful to extended the reload(module) semantic:
> reload(module [, recurse=False])
> If recurse=True it reloads the module and recursively all the modules
> that it imports.

It isn't that simple.  Reloading modules isn't sufficient.
Consider

import spam
a = spam.Eggs()
reload(spam)
print isinstance(a, spam.Eggs)

This will print False because a contains a reference to
the old Eggs which contains a reference to the old spam module.

As I recall, Twisted has some super-reload code that may be
what you want.  See
  http://twistedmatrix.com/documents/current/api/twisted.python.rebuild.html

> 8) Why reload is a function and import is a statement? (So why isn't
> reload a statement too, or both functions?)

import uses the underlying __import__ function.

Consider using the __import__ function directly

math = __import__("math")

The double use of the name "math" is annoying and error-prone.

It's more complicated with module hierarchies.

>>> xml = __import__("xml.sax.handler")
>>> xml.sax.handler
<module 'xml.sax.handler' from '/sw/lib/python2.3/xml/sax/handler.pyc'>
>>> xml.sax.saxutils
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'saxutils'
>>> __import__("xml.sax.saxutils")
<module 'xml' from '/sw/lib/python2.3/xml/__init__.pyc'>
>>> xml.sax.saxutils
<module 'xml.sax.saxutils' from '/sw/lib/python2.3/xml/sax/saxutils.pyc'>
>>> 
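
The usual workaround for getting at the leaf module is to go
through sys.modules after the import; a sketch:

import sys
__import__("xml.sax.handler")
handler = sys.modules["xml.sax.handler"]   # the leaf module itself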


Reload takes a reference to the module to reload.  Consider
import UserString
x = UserString
reload(x)

This reloads UserString.  It could be a statement but there's
no advantage to doing so.


> 9) Functions without a return statement return None: def foo(x): print x
> I think the compiler/interpreter can give a "compilation warning" where
> such function results are assigned to something: y = foo(x)

You might think so, but you'd be wrong.

Consider

def vikings():
  pass

def f():
  global vikings
  def vikings():
    print "Spammity spam!"
    return 1.0

if random.random() > 0.5:
  f()

x = vikings()

More realistically, I've done

class DebugCall:
  def __init__(self, obj):
    self.__obj = obj
  def __call__(self, *args, **kwargs):
    print "Calling", self.__obj, "with", args, kwargs
    x = self.__obj(*args, **kwargs)
    print "Returned with", x
    return x

from wherever import f
f = DebugCall(f)

I don't want it generating a warning for those cases where
the implicit None is returned.

A more useful check (IMHO) is to have PyChecker look for cases
where both explicit and implicit returns can occur in the
same function.  I don't know if it does that already.

Why haven't you looked at PyChecker to see what it does?

> (I know that some of such cases cannot be spotted at compilation time,
> but the other cases can be useful too). I don't know if PyChecker does
> this already. Generally speaking I'd like to see some of those checks
> built into the normal interpreter. Instructions like:
> open = "hello"
> Are legal, but maybe a "compilation warning" can be useful here too (and
> maybe even a runtime warning if a Verbose flag is set).

There's a PEP for that.  See
  http://www.python.org/peps/pep-0329.html

Given your questions it would be appropriate for you to read all the
PEPs.

> 10) There can be something in the middle between the def statement and
> the lambda. For example it can be called "fun" (or it can be called
> "def" still). With it maybe both def and lambdas aren't necessary
> anymore. Examples:
> cube = fun x:
> return x**3
> (the last line is indented)
> 
> sorted(data, fun x,y: return x-y)
> (Probably now it's almost impossible to modify this in the language.)

It's been talked about.  Problems are:
  - how is it written?
  - how big is the region between a lambda and a def?

Try giving real world examples of when you would
use this 'fun' and compare it to lambda and def forms.
You'll find there's at most one extra line of code
needed.  That doesn't seem worthwhile.
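
For instance, the comparison-function case from your own example is
already short both ways (using 2.4's sorted):

data = [5, 2, 9, 1]

# lambda form
print sorted(data, lambda x, y: x - y)

# def form: one extra line
def compare(x, y):
    return x - y
print sorted(data, compare)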

> 
> 11) This is just a wild idea for an alternative syntax to specify a
> global variable inside a function. From:
> 
> def foo(x):
> global y
> y = y + 2
> (the last two lines are indented)
> 
> To:
> 
> def foo(x): global.y = global.y + 2
> 
> Beside the global.y, maybe it can exist a syntax like upper.y or
> caller.y that means the name y in the upper context. upper.upper.y etc.

It does have the advantage of being explicit rather than
implicit.  
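
Something in that spirit can already be spelled out explicitly,
if clumsily, with globals() (just an illustration, not a
recommendation):

y = 10

def foo(x):
    # touch the module-level name without a 'global' declaration
    globals()["y"] = globals()["y"] + 2

foo(1)
print y   # 12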

> 12) Mathematica's interactive IDE suggests possible spelling errors;

I don't know anything about the IDEs.  I have enabled
tab-completion in my interactive Python session, which is nice.
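
In case it's useful, the completion I mean is just the standard
readline recipe:

import rlcompleter   # installs a completer for readline
import readline
readline.parse_and_bind("tab: complete")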

I believe IPython is the closest to giving a Mathematica-like feel.
There's also tmPython.

> 13) In Mathematica language the = has the same meaning of Python, but
> := is different:
> 
> lhs := rhs assigns rhs to be the delayed value of lhs. rhs is maintained
> in an unevaluated form. When lhs appears, it is replaced by rhs,
> evaluated afresh each time.
> 
> I don't know if this can be useful...

Could you explain why it might be useful?
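
The nearest everyday Python spelling is probably a zero-argument
function that you call whenever you want the value re-evaluated;
a sketch:

x = 1
y = lambda: x + 2   # "delayed"; recomputed at each call
print y()           # 3
x = 10
print y()           # 12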

> 14) In one of my last emails of notes, I've tried to explain the Pattern
> Matching programming paradigm of Mathematica. Josiah Carlson answered
> me:

Was there a question in all that?

You are proposing Python include a Prolog-style (or
CLIPS or Linda or ..) programming idiom, yes?  Could you
also suggest a case when one would use it?

> 15) NetLogo is a kind of logo derived from StarLogo, implemented in
> Java.

How about the "turtle" standard library?
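
(It ships with the standard library's Tkinter support; a two-line
taste, which needs a Tk display to run:)

import turtle

turtle.forward(100)   # draw a line 100 units long
turtle.left(90)       # turn the pen 90 degrees
turtle.forward(100)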

I must say it's getting pretty annoying to say things like
"when would this be useful?" and "have you read the documentation?"
in response to your statements.

> Maybe this library can also be faster than the Tkinter pixel
> plotting and the pixel matrix visualisation).

See also matplotlib, chaco, and other libraries that work hard
to make this simple.  Have you done any research on what Python
can do or do you think ... no, sorry, I'm getting snippy.

				Andrew
				dalke at dalkescientific.com



