split lines from stdin into a list of unicode strings

Kurt Mueller kurt.alfred.mueller at gmail.com
Thu Sep 5 09:25:50 EDT 2013


On 05.09.2013 10:33, Peter Otten wrote:
> Kurt Mueller wrote:
>> On 29.08.2013 11:12, Peter Otten wrote:
>>> kurt.alfred.mueller at gmail.com wrote:
>>>> On Wednesday, August 28, 2013 1:13:36 PM UTC+2, Dave Angel wrote:
>>>>> On 28/8/2013 04:32, Kurt Mueller wrote:
>>>>>> For some text manipulation tasks I need a template to split lines
>>>>>> from stdin into a list of strings the way shlex.split() does it.
>>>>>> The encoding of the input can vary.
>> I took your script as a template.
>> But I used the libmagic library (python-magic) instead of chardet.
>> See http://linux.die.net/man/3/libmagic
>> and https://github.com/ahupp/python-magic
>> ( I made tests with files of different size, up to 1.2 [GB] )
>> I had the following issues:
>> - In a real file, the encoding was detected as 'ascii' for
>> detect_lines=1000.
>>   In line 1002 there was an umlaut character, so the
>>   line.decode(encoding) call failed. I think I should add the errors
>>   parameter: line.decode(encoding, errors='replace')
> 
> Tough luck ;) You could try and tackle the problem by skipping leading 
> ascii-only lines. Untested:
> 
> def detect_encoding(instream, encoding, detect_lines, skip_ascii=True):
>     if encoding is None:
>         encoding = instream.encoding
>         if encoding is None:
>             if skip_ascii:
>                 try:
>                     for line in instream:
>                         yield line.decode("ascii")
>                 except UnicodeDecodeError:
>                     pass
>                 else:
>                     return
>             head = [line]
>             head.extend(islice(instream, detect_lines-1))
>             encoding =  chardet.detect("".join(head))["encoding"]
>             instream = chain(head, instream)
>     for line in instream:
>         yield line.decode(encoding)

I find this generator-based solution very nice.
With just a few small modifications it now runs fine.
( line is undefined if skip_ascii is False. )

For ASCII-only files, chardet or libmagic is never invoked,
and detect_lines only comes into play once some
non-ASCII characters appear.
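
As a side note on the errors='replace' parameter used below: it
substitutes U+FFFD for every byte that is invalid in the chosen codec
instead of raising, so a mis-detected encoding can no longer abort the
stream. A quick self-contained check (the byte string is made up for
illustration):

```python
# An umlaut byte (0xFC, latin-1 'ü') is invalid ASCII: a strict decode
# raises UnicodeDecodeError, while errors='replace' yields U+FFFD.
line = b'Gr\xfcezi\n'
try:
    line.decode('ascii')
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
replaced = line.decode('ascii', errors='replace')
print(strict_ok, repr(replaced))   # False 'Gr\ufffdezi\n'
```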

------------------------------------------------------------------------------
# Requires the libmagic Python bindings providing magic.open() and
# magic.MAGIC_MIME_ENCODING (see http://linux.die.net/man/3/libmagic).
from __future__ import print_function
import sys
import magic
from itertools import chain, islice

I_AM = sys.argv[0]  # program name used in diagnostics (assumption)

def decode_stream_lines( inpt_strm, enco_type, numb_inpt, skip_asci=True, ):
    if enco_type is None:
        enco_type = inpt_strm.encoding
        if enco_type is None:
            line_head = []
            if skip_asci:
                try:
                    for line in inpt_strm:
                        yield line.decode( 'ascii' )
                except UnicodeDecodeError:
                    line_head = [ line ] # last line was not ascii
                else:
                    return # all lines were ascii
            # buffer up to numb_inpt lines and let libmagic guess the encoding
            line_head.extend( islice( inpt_strm, numb_inpt - 1 ) )
            magc_enco = magic.open( magic.MAGIC_MIME_ENCODING )
            magc_enco.load()
            enco_type = magc_enco.buffer( "".join( line_head ) )
            magc_enco.close()
            print( I_AM + '-INFO: enco_type=' + repr( enco_type ), file=sys.stderr, )
            if enco_type.rfind( 'binary' ) >= 0: # binary, application/mswordbinary, application/vnd.ms-excelbinary and the like
                return
            # replay the buffered lines before the rest of the stream
            inpt_strm = chain( line_head, inpt_strm )
    for line in inpt_strm:
        yield line.decode( enco_type, errors='replace' )
------------------------------------------------------------------------------
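
For completeness, here is how the decoded lines tie back to the
original goal of the thread (splitting each line the way shlex.split()
does). A minimal self-contained sketch: the encoding is hardcoded to
'latin-1' for illustration, whereas the generator above detects it via
libmagic:

```python
import shlex

def split_decoded_lines(byte_lines, encoding='latin-1'):
    # Decode each raw line (replacing undecodable bytes), then split it
    # shlex-style into a list of fields, honouring quoting.
    for raw in byte_lines:
        yield shlex.split(raw.decode(encoding, errors='replace'))

fields = list(split_decoded_lines([b'gr\xfcn "two words" drei\n']))
print(fields)   # -> [['grün', 'two words', 'drei']]
```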


Thank you very much!
-- 
Kurt Mueller
