[New-bugs-announce] [issue9851] multiprocessing socket timeout will break client
hume
report at bugs.python.org
Tue Sep 14 10:28:42 CEST 2010
New submission from hume <humeafo at gmail.com>:
When using multiprocessing managers with a socket to communicate between the server process and client processes, setting the global socket timeout (no matter how large the value) makes the client always fail with:
  File "c:\python27\lib\multiprocessing\connection.py", line 149, in Client
    answer_challenge(c, authkey)
  File "c:\python27\lib\multiprocessing\connection.py", line 383, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
IOError: [Errno 10035]
This is unreasonable: it means a subprocess cannot use the socket module's global timeout feature at all.
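A minimal sketch of the likely root cause, assuming the "global socket timeout feature" refers to socket.setdefaulttimeout() (errno 10035 is WSAEWOULDBLOCK on Windows): once a default timeout is set, every newly created socket is in timeout mode, which internally makes it non-blocking, while multiprocessing's connection code assumes blocking sockets.

```python
import socket

# Assumption: the report concerns socket.setdefaulttimeout().
# Any socket created after this call carries the timeout and is
# internally non-blocking, so a recv() inside multiprocessing's
# answer_challenge() can raise [Errno 10035] (WSAEWOULDBLOCK).
socket.setdefaulttimeout(30.0)
try:
    s = socket.socket()
    # multiprocessing.connection expects a blocking socket, but this
    # one now reports a 30-second timeout:
    print(s.gettimeout())  # 30.0
    s.close()
finally:
    socket.setdefaulttimeout(None)  # restore the blocking default
```

Resetting the default to None before creating the manager connection is a possible workaround until the library handles timeout-mode sockets itself.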
A second issue is at line 138 in managers.py:
# do authentication later
self.listener = Listener(address=address, backlog=5)
self.address = self.listener.address
backlog=5 limits the listen queue to only 5 pending connections, which is far too restrictive; this should be an argument the user can specify.
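As a sketch of the requested change: the underlying Listener class already accepts a backlog argument, so the hard-coded value only exists in the manager's Server. The address and backlog values below are illustrative, not part of the manager's API.

```python
from multiprocessing.connection import Listener

# Listener itself already exposes backlog; only the manager's Server
# hard-codes backlog=5. Port 0 here just asks the OS for a free port
# (an illustration, not what managers.py does).
listener = Listener(('127.0.0.1', 0), backlog=16)
host, port = listener.address
print("listening on %s:%d" % (host, port))
listener.close()
```

Exposing backlog as a keyword argument on BaseManager.__init__ and passing it through to this Listener call would be a small, backward-compatible fix.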
----------
components: Library (Lib)
messages: 116375
nosy: hume
priority: normal
severity: normal
status: open
title: multiprocessing socket timeout will break client
versions: Python 2.7
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue9851>
_______________________________________