[Python-ideas] The async API of the future

Antoine Pitrou solipsis at pitrou.net
Sat Nov 3 11:14:18 CET 2012


On Sat, 3 Nov 2012 10:37:41 +0100
Sturla Molden <sturla at molden.no> wrote:
> Den 3. nov. 2012 kl. 00:54 skrev Antoine Pitrou <solipsis at pitrou.net>:
> 
> >>>> Or a simpler solution than nesting them into a tree: Let the calls to WaitForMultipleObjects time out at once, and loop over as many events as you need, polling 64 event objects simultaneously.
> >>> 
> >>> Well, that's basically O(number of objects), isn't it?
> >> 
> >> Yes, but nesting would be O(log64 n).
> > 
> > No, you still have O(n) calls to WaitForMultipleObjects, just arranged
> > differently.
> > (in other words, the depth of your tree is O(log n), but its number of
> > nodes is O(n))
> > 
> > 
> 
> True, but is the time latency O(n) or O(log n)? 

Right, that's the difference. However, I think here we are concerned
about CPU load on the server, not individual latency (as long as it is
acceptable, e.g. lower than 5 ms).
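
For concreteness, the chunked-polling variant Sturla describes could look
roughly like the following. This is an untested ctypes sketch (Windows-only);
the constants and the zero-timeout loop are just my illustration of the idea,
and note that WaitForMultipleObjects reports at most one signalled handle per
call:

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32
    WAIT_OBJECT_0 = 0x00000000
    MAXIMUM_WAIT_OBJECTS = 64

    def poll_handles(handles):
        # Poll an arbitrary number of event handles, 64 at a time, with a
        # zero timeout.  Each call reports only the first signalled handle
        # in its chunk, so a full poll is O(n/64) calls.
        ready = []
        for base in range(0, len(handles), MAXIMUM_WAIT_OBJECTS):
            chunk = handles[base:base + MAXIMUM_WAIT_OBJECTS]
            arr = (wintypes.HANDLE * len(chunk))(*chunk)
            res = kernel32.WaitForMultipleObjects(len(chunk), arr, False, 0)
            if WAIT_OBJECT_0 <= res < WAIT_OBJECT_0 + len(chunk):
                ready.append(base + (res - WAIT_OBJECT_0))
        return ready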

> Also, from what I read, the complexity of select.poll is O(n) with respect to file handles, so this should not be any worse (O(log n) latency wait, O(n) polling), I think.

epoll and kqueue are better than O(number of objects) though.
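
For comparison, here is a minimal (Linux-only) epoll accept loop, just to
illustrate why it scales better: the wait cost is proportional to the number
of *ready* descriptors, not to the number of registered ones. The port and
buffer size are arbitrary choices for the sketch:

    import select
    import socket

    sock = socket.socket()
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 8000))          # arbitrary port for the sketch
    sock.listen(128)
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN)
    conns = {}

    while True:
        # epoll wakes us with just the ready descriptors, however many
        # thousands of idle ones are registered.
        for fd, events in ep.poll(1.0):
            if fd == sock.fileno():
                conn, _ = sock.accept()
                conn.setblocking(False)
                conns[conn.fileno()] = conn
                ep.register(conn.fileno(), select.EPOLLIN)
            elif events & select.EPOLLIN:
                data = conns[fd].recv(4096)
                if not data:                   # peer closed the connection
                    ep.unregister(fd)
                    conns.pop(fd).close()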

> Another interesting strategy for high-performance on Windows 64: Just use blocking i/o and one thread per client. The stack-space limitation is a 32-bit problem, and Windows 64 has no problem scheduling an insane number of threads. Even desktop computers today can have 16 GB of RAM, so there is virtually no limitation on the number of i/o threads Windows 64 can multiplex.

That's still a huge waste of RAM, isn't it?
Also, by relying on preemptive threading you have to use Python's
synchronization primitives (locks, etc.), and I don't know how these
would scale.
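
For reference, the thread-per-client model is trivially simple to write; the
open question is precisely whether thousands of such threads behave well under
the GIL. A bare-bones echo-server sketch (illustrative only):

    import socket
    import threading

    def handle(conn):
        # Plain blocking I/O: the thread spends most of its life blocked
        # in recv(), during which it does not hold the GIL.
        try:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                conn.sendall(data)       # trivial echo
        finally:
            conn.close()

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 8001))                 # arbitrary port for the sketch
    srv.listen(128)
    while True:
        conn, _ = srv.accept()
        t = threading.Thread(target=handle, args=(conn,))
        t.daemon = True
        t.start()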

> But would it scale with Python threads and the GIL as well? You would be better placed to answer that.

I haven't done any tests with a large number of threads, but the GIL
certainly has an overhead, both per thread and per context switch.
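
The classic CPU-bound demonstration gives a feel for that switching cost,
although it is only illustrative; an I/O-bound workload with mostly blocked
threads would behave differently:

    import threading
    import time

    def spin(n):
        # Pure-Python CPU-bound loop; holds the GIL while running.
        while n:
            n -= 1

    N = 10000000

    t0 = time.time()
    spin(N)
    print("single thread: %.2f s" % (time.time() - t0))

    t0 = time.time()
    threads = [threading.Thread(target=spin, args=(N // 2,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads:   %.2f s" % (time.time() - t0))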

Regards

Antoine.




