How should I handle socket receiving?

Dan Stromberg drsalists at gmail.com
Fri Mar 11 18:56:31 EST 2011


On Fri, Mar 11, 2011 at 3:30 PM, Hans <hansyin at gmail.com> wrote:

> I'm thinking to write a code which to:
> 1. establish tons of udp/tcp connections to a server
> 2. send packets from each connections
> 3. receive packets from each connection and then do something based
> on received content and connection status.
> 4. repeat step 2 and step 3.
>
> my question is: how should I handle receiving traffic from each
> connection respectively?  Two ways I'm thinking of are:
> 1. Set up connections one by one, put each socket handler into an
> array (dictionary?), where the array also records each handler's
> status. Then in a big loop, keep using "select" to check the sockets;
> if any socket has received something, find it in the array and, based
> on its status, do what should be done.  This one is like single-
> threaded?
>
This should work, except that you don't really need to keep the sockets in
an array or search for them there, because select returns the list of
sockets that are ready for I/O.
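For example, something like this (a minimal sketch; the handler callback and
the local socketpairs standing in for real server connections are just
placeholders for illustration):

```python
import select
import socket

def select_loop(socks, handler):
    # Approach #1: one loop, one select() call over all open sockets.
    # select() hands back only the sockets that are ready, so there is
    # no need to keep a separate array and search it.
    open_socks = list(socks)
    while open_socks:
        readable, _, _ = select.select(open_socks, [], [], 1.0)
        if not readable:
            break  # nothing ready before the timeout
        for sock in readable:
            data = sock.recv(4096)
            if data:
                handler(sock, data)  # act on content / per-socket state
            else:
                open_socks.remove(sock)  # peer closed the connection
                sock.close()

# Demo: two local socket pairs stand in for connections to a server.
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()
b1.sendall(b"reply on connection 1")
received = []
select_loop([a1, a2], lambda sock, data: received.append(data))
```

Only the socket with pending data comes back from select; the quiet one
just sits there until the timeout.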


> 2. (I don't know if this one works, but I prefer it if it does.) Set
> up a connection object; write code in the object to handle
> sending/receiving and status. Let each object check/receive its
> socket by itself (I don't know how yet).  Then in the main proc, I
> just initiate those objects
>

Each instance of your object(s) could select on just one socket.  Start each
object doing its thing in parallel.  How you do that depends on what kind of
concurrency you choose, but there tend to be lots of nice example programs,
easy to find with Google, showing how.

CPython's threading is not strong because of the GIL (not sure how much
that's improved in 3.2), so for concurrency you're probably best off with
the multiprocessing module or CPython 3.2's concurrent.futures - or an
alternative Python implementation that threads well, like Jython.  Oh, and
you could try greenlets.
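Your #2 might look something like this (a sketch, not a definitive design;
the Connection class, its status field, and the hello/reply exchange are
made up for illustration - concurrent.futures is where the 3.2 "futures"
module lives):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

class Connection:
    # Approach #2: each object owns one socket plus its own status,
    # and handles its own sending/receiving.
    def __init__(self, sock):
        self.sock = sock
        self.status = "new"

    def run(self):
        self.sock.sendall(b"hello")   # step 2: send
        data = self.sock.recv(4096)   # step 3: receive
        # ...do something based on data and self.status...
        self.status = "done"
        return data

def run_all(connections):
    # One worker per connection.  ThreadPoolExecutor is fine for
    # I/O-bound work; swap in ProcessPoolExecutor (multiprocessing)
    # to sidestep the GIL for CPU-bound handling.
    with ThreadPoolExecutor(max_workers=len(connections)) as pool:
        return list(pool.map(lambda conn: conn.run(), connections))
```

The main proc then just builds the Connection objects and hands them to
run_all; each object never has to know about any socket but its own.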

In a way this is simpler than #1, in a way it's not.

Supposedly tornado gets lots of great performance -because- it's single
threaded.  And it makes sense that this would be fast, up to a point,
because you eliminate lots of context switching.

However, on a system with lots of cores, and perhaps to future-proof your
code (individual cores most likely aren't getting much faster anymore, but
the number of cores is going up), you're probably better off with something
concurrent like your #2.

HTH