split lines from stdin into a list of unicode strings

Kurt Mueller kurt.alfred.mueller at gmail.com
Thu Sep 5 03:42:36 EDT 2013


On 29.08.2013 11:12, Peter Otten wrote:
> kurt.alfred.mueller at gmail.com wrote:
>> On Wednesday, August 28, 2013 1:13:36 PM UTC+2, Dave Angel wrote:
>>> On 28/8/2013 04:32, Kurt Mueller wrote:
>>>> For some text manipulation tasks I need a template to split lines
>>>> from stdin into a list of strings the way shlex.split() does it.
>>>> The encoding of the input can vary.
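
(For reference, shlex.split() tokenizes a line the way a POSIX shell
would, e.g.:

    >>> import shlex
    >>> shlex.split('ls -l "my file.txt"')
    ['ls', '-l', 'my file.txt']

Note that the Python 2 shlex does not cope well with unicode strings,
so the usual workaround is to encode to UTF-8 before splitting and
decode the tokens again afterwards.)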

> You can compromise and read ahead a limited number of lines. Here's my demo 
> script (The interesting part is detect_encoding(), I got a bit distracted by 
> unrelated stuff...). The script does one extra decode/encode cycle -- it 
> should be easy to avoid that if you run into performance issues.

I took your script as a template.
But I used the libmagic library (python-magic) instead of chardet.
See http://linux.die.net/man/3/libmagic
and https://github.com/ahupp/python-magic
( I made tests with files of different sizes, up to 1.2 GB )
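
Stripped down, the detection part of my version looks about like this
(a simplified sketch, Python 3 spelling for brevity; detect_encoding and
detect_lines are from your script, the Magic(mime_encoding=True) /
from_buffer() calls are the python-magic API):

    import shlex
    import sys
    import magic   # ahupp's python-magic, a ctypes wrapper around libmagic

    def detect_encoding(stream, detect_lines=1000, default='UTF-8'):
        # read ahead a limited number of lines and let libmagic guess
        # the encoding of that sample
        head = b''.join(stream.readline() for _ in range(detect_lines))
        enc = magic.Magic(mime_encoding=True).from_buffer(head)
        return head, enc or default

    stdin = sys.stdin.buffer                   # the raw byte stream
    head, enc = detect_encoding(stdin)
    for raw in head.splitlines(True) + stdin.readlines():
        print(shlex.split(raw.decode(enc)))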

I had the following issues:

- In a real file, the encoding was detected as 'ascii' with detect_lines=1000,
  but line 1002 contained an umlaut character, so line.decode(encoding) failed.
  I think I will add the errors parameter, line.decode(encoding, errors='replace')
  (see the sketch after this list).

- If the buffer was bigger than a few megabytes, the encoding returned by
  libmagic was always None. The big files had very long lines (more than
  4 kB per line), so with detect_lines=1000 the buffer exceeded that size.

- magic.buffer() (the equivalent of chardet.detect()) takes about 2 seconds
  per megabyte of buffer.
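
My workarounds for now (the 1 MB cap is just a value that stayed within
libmagic's limits in my tests, nothing official):

    MAX_DETECT_BYTES = 1024 * 1024

    def detect_encoding(stream, detect_lines=1000, default='UTF-8'):
        head = b''.join(stream.readline() for _ in range(detect_lines))
        # cap the sample handed to libmagic: keeps detection fast and
        # below the size where it started returning None
        enc = magic.Magic(mime_encoding=True).from_buffer(
            head[:MAX_DETECT_BYTES])
        if enc in (None, 'binary', 'unknown-8bit'):
            enc = default      # fall back instead of failing later
        return head, enc

and, when decoding the individual lines:

    # tolerate characters (like the umlaut in line 1002) that were
    # not in the sample libmagic saw
    text = line.decode(enc, errors='replace')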



-- 
Kurt Mueller


