[PYTHON-CRYPTO] Memory leak in SSL.Connection
Andre Reitz
reitz at INWORKS.DE
Thu Apr 1 23:25:53 CEST 2004
Another question....
Since my patch for the memory leak (triggered when setblocking(x) is called),
I have run into another problem.
We use M2Crypto in a multithreaded Python server.
I am fairly sure that when the Connection object
gets garbage collected (and its __del__ method is called),
the server sometimes hangs completely.
I suspect that an EOF/connection loss on a client may block the server
in m2.bio_free(self.sslbio), m2.bio_free(self.sockbio), or self.socket.close(),
which in turn blocks all other threads reading from and writing to
their own connections...
IN OTHER WORDS:
Is it possible that
m2.bio_free(self.sslbio),
m2.bio_free(self.sockbio),
or self.socket.close()
may hang if the client does not shut down the connection cleanly?
What is the correct way for the server to kill a connection?
set_shutdown(?)
shutdown(?)
Currently we close the Python socket and let the M2Crypto Connection
be garbage-collected....
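[Editor's sketch, not an answer from the thread: one stdlib-level defensive pattern is to bound every teardown call with a timeout and shut both directions down explicitly before closing, so a peer that never sends a clean EOF cannot hang the closing thread. `force_close` is an illustrative name and this uses plain `socket` objects, nothing M2Crypto-specific:]

```python
import socket

def force_close(sock, timeout=2.0):
    """Tear down a socket without risking an indefinite block (sketch)."""
    try:
        sock.settimeout(timeout)          # bound any further blocking I/O
        sock.shutdown(socket.SHUT_RDWR)   # notify the peer, stop both directions
    except OSError:
        pass                              # peer may already be gone
    finally:
        sock.close()                      # always release the descriptor

a, b = socket.socketpair()
force_close(a)
force_close(b)
assert a.fileno() == -1 and b.fileno() == -1   # both descriptors released
```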
Thank you very much in advance....
Andre'
class Connection:
    ...
    def __del__(self):
        try:
            m2.bio_free(self.sslbio)
            m2.bio_free(self.sockbio)
        except AttributeError:
            pass
        self.socket.close()
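[Editor's note on why the original code leaked, with a hedged demonstration: assigning `self.send = self._write_bio` stores a *bound* method on the instance, which creates a reference cycle (instance -> instance dict -> bound method -> __self__ -> instance). Under Python 2's collector, a cycle through an object with __del__ was never reclaimed, hence the leak that the unbound `Connection._write_bio` patch avoided. A minimal demonstration of the cycle, without __del__ so a modern collector can still reclaim it:]

```python
import gc
import weakref

class Leaky:
    def _write(self):
        return "data"

    def __init__(self):
        # Storing a *bound* method on the instance creates a cycle:
        # self -> self.__dict__['send'] -> bound method -> __self__ -> self
        self.send = self._write

gc.disable()                 # rule out a spontaneous collection mid-test
obj = Leaky()
ref = weakref.ref(obj)
del obj

# The refcount never reaches zero, so plain refcounting cannot free it.
alive_before = ref() is not None
gc.collect()                 # only the cyclic collector can break the cycle
alive_after = ref() is not None
gc.enable()

assert alive_before is True
assert alive_after is False
```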
On Thu, 1 Apr 2004 21:49:07 +0200
Andre Reitz <reitz at INWORKS.DE> wrote:
> Bad news.......
>
> This patch doesn't work with Python 2.2.3 !!!!
>
> Something like this happens:
> TypeError: unbound method ...() must be called with ... instance as first argument (got nothing instead)
>
>
> but....
> here is my patch, which should work in any case:
>
> (but not so efficient)
>
>
> Greetings, Andre'
>
>
> --- M2Crypto/SSL/Connection.py 2004-03-25 07:36:04.000000000 +0100
> +++ /home/andre/m2crypto-0.13/M2Crypto/SSL/Connection.py 2004-04-01 21:32:45.000000000 +0200
> @@ -120,34 +120,26 @@
>          connection."""
>          return m2.ssl_pending(self.ssl)
>
> -    def _write_bio(self, data):
> -        return m2.ssl_write(self.ssl, data)
> -
> -    def _write_nbio(self, data):
> -        return m2.ssl_write_nbio(self.ssl, data)
> -
> -    def _read_bio(self, size=1024):
> -        if size <= 0:
> -            raise ValueError, 'size <= 0'
> -        return m2.ssl_read(self.ssl, size)
> -
> -    def _read_nbio(self, size=1024):
> +    def _read(self, size=1024):
>          if size <= 0:
>              raise ValueError, 'size <= 0'
> +        if self._blocking:
> +            return m2.ssl_read(self.ssl, size)
>          return m2.ssl_read_nbio(self.ssl, size)
>
> -    sendall = send = write = _write_bio
> -    recv = read = _read_bio
> +    def _write(self, data):
> +        if self._blocking:
> +            return m2.ssl_write(self.ssl, data)
> +        return m2.ssl_write_nbio(self.ssl, data)
> +
> +    sendall = send = write = _write
> +    recv = read = _read
>
> +    _blocking = 1
>      def setblocking(self, mode):
>          """Set this connection's underlying socket to _mode_."""
>          self.socket.setblocking(mode)
> -        if mode:
> -            self.send = self.write = self._write_bio
> -            self.recv = self.read = self._read_bio
> -        else:
> -            self.send = self.write = self._write_nbio
> -            self.recv = self.read = self._read_nbio
> +        self._blocking = mode
>
>      def fileno(self):
>          return self.socket.fileno()
>
> On Tue, 30 Mar 2004 23:55:21 +0800
> Ng Pheng Siong <ngps at POST1.COM> wrote:
>
> > On Tue, Mar 30, 2004 at 07:25:36PM +1200, Michael Dunstan wrote:
> > > I have seen the same leak when running ZServerSSL. Had to restart the
> > > server every few days. Until the following change was made for
> > > setblocking:
> > >
> > > -            self.send = self.write = self._write_bio
> > > +            self.send = self.write = Connection._write_bio
> > > -            self.recv = self.read = self._read_bio
> > > +            self.recv = self.read = Connection._read_bio
> > >          else:
> > > -            self.send = self.write = self._write_nbio
> > > +            self.send = self.write = Connection._write_nbio
> > > -            self.recv = self.read = self._read_nbio
> > > +            self.recv = self.read = Connection._read_nbio
> > >
> > > Since applying the patch we have not had any memory problems for some
> > > time. About 8 months now.
> >
> > Great! Thanks for the patch Michael.
> >
> > Now to roll out 0.13p1!
> >
> > > It is quite a simple matter to correct the headers used for the image
> > > 304's in Zope. There is even an issue in the zope collector about this:
> > > http://collector.zope.org/Zope/544. Looks like this incorrect handling
> > > of the headers in zope is intentional to support caching in some
> > > relic version of apache configured as a proxy server.
> >
> > Bad Zope. No biscuit for Zope.
> >
> >
> > Cheers.
> >
> > --
> > Ng Pheng Siong <ngps at netmemetic.com>
> >
> > http://firewall.rulemaker.net -+- Firewall Change Management & Version Control
> > http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
>
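[Editor's sketch of the flag-based dispatch in Andre's patch above: one `_read`/`_write` pair consulting `self._blocking` at call time, instead of rebinding bound methods in `setblocking`. The backend methods below are illustrative stand-ins for the `m2.ssl_write` / `m2.ssl_write_nbio` calls:]

```python
class Chan:
    """Dispatch-on-a-flag pattern, as in the patch."""

    _blocking = 1  # class-attribute default, overridden per instance

    def setblocking(self, mode):
        self._blocking = mode  # just a flag; no methods rebound

    def write(self, data):
        # Dispatch at call time instead of storing a bound method on the
        # instance, so no self-referencing cycle is ever created.
        if self._blocking:
            return self._write_blocking(data)
        return self._write_nonblocking(data)

    def _write_blocking(self, data):
        return ("blocking", data)      # stand-in for m2.ssl_write

    def _write_nonblocking(self, data):
        return ("nonblocking", data)   # stand-in for m2.ssl_write_nbio

c = Chan()
assert c.write(b"x") == ("blocking", b"x")
c.setblocking(0)
assert c.write(b"x") == ("nonblocking", b"x")
assert list(vars(c)) == ["_blocking"]  # only the flag lives on the instance
```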