newbie questions about imports, threads, and optimizations

Thomas Wouters thomas at xs4all.nl
Thu Jun 29 10:13:31 EDT 2000


On Wed, 28 Jun 2000 09:10:04 -0700, Bryce Sellers <bryce.sellers at trw.com> wrote:
>I have been using python off and on (unfortunately, mostly off) for about a
>year now, and have got a couple of questions for someone who knows more than

>First, is there a way to do something like this:

>import string, mystring
>m = 'string'
>`m`.capitalize('hello')
>m='mystring'
>`m`.capitalize('hello')
>
>In other words, use a string as a module or function name and execute that
>portion of code.

Not sure what you mean, but you can do this:

eval('string').capitalize('hello') # the expression 'string' could evaluate to any object.

or this:

sys.modules['string'].capitalize('hello') # 'string' is always a module (needs 'import sys').

Of course, using string literals isn't that exciting. It also isn't that
exciting if you just want to pass the module name to a function, because you
can pass and assign a module just like any other object. But it might be handy
if you want a user to input a module or object name.
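For instance, here's a small sketch of dispatching on a module and function
name given as strings (call_by_name is just an illustrative helper of mine,
and I'm using string.capwords as the example function):

```python
import importlib

def call_by_name(module_name, func_name, arg):
    # Import the module by its (string) name -- this also works
    # for module names a user typed in at a prompt.
    module = importlib.import_module(module_name)
    # Fetch the function off the module by name, then call it.
    return getattr(module, func_name)(arg)

print(call_by_name('string', 'capwords', 'hello world'))  # Hello World
```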

>Finally, I've noticed that a lot of the discussions surrounding
>optimizations center on the fact that even the standard library function
>names can be overwritten.  Why can't we just have a command-line argument
>(or something along those lines) that tells python that all variables
>defined in modules are read-only?  Now the interpreter can optimize the
>heck out of it, and throw an exception if anything tries to modify
>something.  Of course, this idea is so simple that I'm sure the only reason
>it hasn't been done is because its a bad idea.  My question is, why is it a
>bad idea?

Wrong assumption. It hasn't been done because it's pretty hard, and a fair
amount of work to boot. Most of Python's internal structures are really normal
Python objects. Namespaces (other than function-local namespaces) are really
normal PyDicts, code objects are themselves ordinary Python objects, etc.
This makes it really usable, and makes the introspection abilities almost
'free', but it does mean you can't just make all those things read-only :-)

You have a point though. After the discussion on optimization last week, I
went on thinking about other possibilities: inlining functions, turning
functions into 'macro' calls, stuff like that.

Eventually I realized it might be perfectly possible to compile Python into
native machine code -- by creating a completely static binary, including all
modules. The compiler would have to keep tabs on symbols that get changed in
Python, but that's possible because you're building a completely
'stand-alone' binary. For those symbols, it'd have to create some level of
indirection, so code like that could still work. In the places where those
symbols are never assigned to, you can apply the usual array of aggressive
optimizers ;)
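A toy sketch of the indirection idea (names like 'table' are mine, purely for
illustration): a symbol that might be rebound gets looked up through a table
at call time, while one the compiler proves constant gets bound directly.

```python
# Symbols that might be rebound: fetched through a table each call.
table = {'double': lambda x: x * 2}

def flexible(x):
    return table['double'](x)   # indirect call, sees later rebinding

# Symbols proven constant: bound once, so the call can be optimized.
_frozen = table['double']
def optimized(x):
    return _frozen(x)           # direct call, immune to rebinding

table['double'] = lambda x: x * 3   # somebody rebinds the symbol
print(flexible(4))    # 12 -- the indirect call sees the change
print(optimized(4))   # 8  -- the 'optimized' call does not
```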

I haven't yet found a reason why this couldn't work. You'll lose the ability
to influence the running program the way Python code can now, ending up with a
language
not much more complex than C++, so in my completely uneducated opinion, the
Python CompleteCompiler shouldn't have to be more complicated than a usable
and conforming C++ compiler (note, I said 'and', so I'm not exaggerating. ;)
And of course you can start with a compiler that barfs when you assign to a
module or do nifty things like exec or eval() ;)
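(As a toy illustration of the 'barfs when you assign to a module' idea,
entirely my own sketch: you can already make a module object that refuses
assignments at runtime, by subclassing the module type.)

```python
import types

class FrozenModule(types.ModuleType):
    # Reject any attribute assignment after creation; a compiler
    # could refuse the same thing statically instead.
    def __setattr__(self, name, value):
        raise AttributeError('read-only module, cannot set %r' % name)

mod = FrozenModule('frozen_example')
try:
    mod.x = 1
except AttributeError as e:
    print(e)
```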

However, as I've said before, I know diddly-squat about real-world compilers.
I think this would be a really cool thing to work on, even though it
basically requires writing a Python implementation from scratch, eventually
designing a way for existing extension modules to be compiled into
extension code that can be linked with the compiled-Python programs and
stuff like that.

Thomas.
