[issue26049] Poor performance when reading large xmlrpc data

Sergi Almacellas Abellana report at bugs.python.org
Fri Jan 8 08:36:38 EST 2016


New submission from Sergi Almacellas Abellana:

By default, the xmlrpclib parser reads response data in chunks of 1024 bytes [1], which leads to a lot of data concatenations when reading large responses, and that is very slow in Python.
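
The relevant loop (paraphrased and slightly simplified from the code at
[1]; here 'stream' is the HTTP response and 'p' the XML parser) is
roughly:

    # Paraphrase of the read loop in xmlrpclib.Transport.parse_response
    while True:
        data = stream.read(1024)  # at most 1024 bytes per read
        if not data:
            break
        p.feed(data)              # ~20000 feed() calls for a 20 MB body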

Increasing the chunk size from 1024 bytes to a higher value significantly improves performance.

On the same machine, we tested with 20 MB of data and got the following results:

Chunks of 1024 bytes: 1 min 48.933491 sec
Chunks of 10 * 1024 * 1024 bytes: 0.245282 sec
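
A minimal way to reproduce this kind of measurement (illustrative
sketch only; the address, port and method name are made up, not from
our actual test setup -- run serve() in one process and fetch() in
another):

    import time
    import xmlrpclib
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def serve():
        # Serve a ~20 MB string over XML-RPC.
        server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
        server.register_function(lambda: "x" * (20 * 1024 * 1024),
                                 "get_large_data")
        server.serve_forever()

    def fetch():
        # Time the call; it is dominated by Transport.parse_response
        # reading the response body in small chunks.
        proxy = xmlrpclib.ServerProxy("http://localhost:8000")
        start = time.time()
        data = proxy.get_large_data()
        print "fetched %d bytes in %f sec" % (len(data), time.time() - start)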

We have chosen 10 * 1024 * 1024, as it is the same value used in issue792570.

We ran our tests on Python 2.7, but the same code exists in the default branch [2] (and in the 3.x branches as well [3][4][5][6]), so I believe all versions are affected.

I can work on a patch if you think this change is acceptable.

IMHO it is logical to allow the developer to customize the chunk size instead of using a hard-coded one; one possible shape for that is sketched below.
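
In the meantime this can be worked around from user code, and the same
idea could be the basis of a patch (hedged sketch: the chunk_size
attribute is hypothetical, and the method body paraphrases the stdlib
loop at [1] with the verbose output omitted):

    import xmlrpclib

    class BigChunkTransport(xmlrpclib.Transport):
        # Hypothetical attribute; 10 * 1024 * 1024 matches issue792570.
        chunk_size = 10 * 1024 * 1024

        def parse_response(self, response):
            # Same logic as xmlrpclib.Transport.parse_response, but
            # reading self.chunk_size bytes at a time instead of 1024.
            if hasattr(response, "getheader") and \
                    response.getheader("Content-Encoding", "") == "gzip":
                stream = xmlrpclib.GzipDecodedResponse(response)
            else:
                stream = response
            p, u = self.getparser()
            while True:
                data = stream.read(self.chunk_size)
                if not data:
                    break
                p.feed(data)
            if stream is not response:
                stream.close()
            p.close()
            return u.close()

    proxy = xmlrpclib.ServerProxy("http://localhost:8000",
                                  transport=BigChunkTransport())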

[1] https://hg.python.org/cpython/file/2.7/Lib/xmlrpclib.py#l1479
[2] https://hg.python.org/cpython/file/default/Lib/xmlrpc/client.py#l1310
[3] https://hg.python.org/cpython/file/3.5/Lib/xmlrpc/client.py#l1310
[4] https://hg.python.org/cpython/file/3.4/Lib/xmlrpc/client.py#l1324
[5] https://hg.python.org/cpython/file/3.3/Lib/xmlrpc/client.py#l1316
[6] https://hg.python.org/cpython/file/3.2/Lib/xmlrpc/client.py#l1301

----------
components: XML
messages: 257756
nosy: pokoli
priority: normal
severity: normal
status: open
title: Poor performance when reading large xmlrpc data

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue26049>
_______________________________________

