Standard Threads vs Weightless Threads

Björn Lindström bkhl at stp.lingfil.uu.se
Mon Aug 1 17:12:09 EDT 2005


Christopher Subich <spam.csubich+block at block.subich.spam.com> writes:

> the primary benefit (on uniprocessor systems) that if one thread
> executes a blocking task (like IO writes) another thread will receive
> CPU attention.

On multiprocessor systems, of course, the benefit is that threads can
run on separate CPUs. This could possibly be addressed by transparently
running one OS thread per CPU, each containing a set of lightweight
threads, and I hear people are thinking about that for Erlang (the
lightweight-processes language extravaganza).

In the meantime, you can of course solve this by running two separate
instances of your program that communicate with each other somehow.
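Roughly like this, for instance: one instance starts a second copy of
itself and feeds it work over a pipe. (An untested sketch using the
subprocess module; the parent/worker split and the toy doubling task are
made up purely for illustration.)

    import subprocess
    import sys

    def worker():
        # Second instance: read numbers on stdin, write results on stdout.
        for line in sys.stdin:
            print(int(line) * 2)
            sys.stdout.flush()

    def parent():
        # First instance: start another copy of this script and talk to it.
        child = subprocess.Popen([sys.executable, __file__, "worker"],
                                 stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE, text=True)
        out, _ = child.communicate("\n".join(str(n) for n in range(5)) + "\n")
        print("results from the other process:", out.split())

    if __name__ == "__main__":
        worker() if len(sys.argv) > 1 else parent()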

Another benefit is that you can have more threads than the limit the OS
imposes on you, and they can be made to consume quite a bit less memory.
Some language implementations manage to deal with hundreds of thousands,
and in some cases even millions, of threads this way.
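To see why they can be that cheap, here is a toy round-robin scheduler
over Python generators, where each "thread" is just an object plus a
saved execution point. (My own illustration of the general idea, not how
Stackless or any of those implementations actually work internally.)

    from collections import deque

    def counter(name, n):
        # Each generator acts as one cooperative "thread".
        for i in range(n):
            print(name, i)
            yield              # hand control back to the scheduler

    def run(tasks):
        queue = deque(tasks)
        while queue:
            task = queue.popleft()
            try:
                next(task)         # run the task up to its next yield
                queue.append(task) # still alive, so reschedule it
            except StopIteration:
                pass               # this task is finished

    # Creating tens of thousands of these costs little more than the objects.
    run([counter("a", 3), counter("b", 2)])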

> They're not in the standard library because implementing microthreads
> has thus far required a very large rewrite of the CPython architecture
> -- see Stackless Python.

Ruby's built-in threading also works something like this (if I understand
Stackless Python correctly).

(http://www.rubycentral.com/book/tut_threads.html)

Personally, though, I think treating lightweight threads as separate
processes (which can only pass messages to each other, and do not share
scope in any way), like Erlang does, seems to make dealing with massive
concurrency easier.
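
In Python terms that style would look roughly like this, with a Queue
standing in for each process's mailbox and nothing shared except the
messages. (A quick, untested sketch with ordinary threads playing the
part of Erlang-style processes; the adder task is made up.)

    import threading
    from queue import Queue

    def adder(inbox, outbox):
        # This "process" keeps its state private and only reads its mailbox.
        total = 0
        while True:
            msg = inbox.get()
            if msg is None:        # plain sentinel meaning "no more messages"
                outbox.put(total)
                return
            total += msg

    inbox, outbox = Queue(), Queue()
    threading.Thread(target=adder, args=(inbox, outbox)).start()
    for n in range(10):
        inbox.put(n)
    inbox.put(None)
    print("sum computed by the other thread:", outbox.get())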

-- 
Björn Lindström <bkhl at stp.lingfil.uu.se>
Student of computational linguistics, Uppsala University, Sweden


