catch UnicodeDecodeError

wxjmfauth at gmail.com wxjmfauth at gmail.com
Thu Jul 26 06:19:36 EDT 2012


On Thursday, July 26, 2012 9:46:27 AM UTC+2, Jaroslav Dobrek wrote:
> On Jul 25, 8:50 pm, Dave Angel <d... at davea.name> wrote:
> > On 07/25/2012 08:09 AM, jaroslav.dob... at gmail.com wrote:
> >
> > > On Wednesday, July 25, 2012 1:35:09 PM UTC+2, Philipp Hagemeister wrote:
> > >> Hi Jaroslav,
> >
> > >> you can catch a UnicodeDecodeError just like any other exception. Can
> > >> you provide a full example program that shows your problem?
> >
> > >> This works fine on my system:
> >
> > >> import sys
> > >> open('tmp', 'wb').write(b'\xff\xff')
> > >> try:
> > >>     buf = open('tmp', 'rb').read()
> > >>     buf.decode('utf-8')
> > >> except UnicodeDecodeError as ude:
> > >>     sys.exit("Found a bad char in file " + "tmp")
> >
> > > Thank you. I got it. What I need to do is explicitly decode text.
> >
> > > But I think trial and error with moving files around will in most cases be faster. Usually, such a problem occurs with some (usually complex) program that I wrote quite a long time ago. I don't like editing old and complex programs that work under all normal circumstances.
> >
> > > What I am missing (especially for Python3) is something like:
> >
> > > try:
> > >     for line in sys.stdin:
> > > except UnicodeDecodeError:
> > >     sys.exit("Encoding problem in line " + str(line_number))
> >
> > > I got the point that there is no such thing as encoding-independent lines. But if no line ending can be found, then the file simply has one single line.
> >
> > I can't understand your question.  If the problem is that the system
> > doesn't magically produce a variable called line_number, then generate
> > it yourself, by counting in the loop.
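Dave's suggestion, counting the lines yourself while decoding explicitly, can be sketched roughly like this. The `check_lines` helper and the UTF-8 default are my own illustration, not code from the thread:

```python
import sys

def check_lines(path, encoding="utf-8"):
    # Read the file in binary mode and decode each line explicitly,
    # counting line numbers so a decode failure can be localized.
    with open(path, "rb") as f:
        for line_number, raw in enumerate(f, start=1):
            try:
                raw.decode(encoding)
            except UnicodeDecodeError:
                sys.exit("Encoding problem in line " + str(line_number))
```

Iterating a binary file splits on b'\n', so for the usual line endings the count matches the textual line numbers.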
> 
> 
> That was just a very incomplete and general example.
> 
> My problem is solved. What I need to do is explicitly decode text when
> reading it. Then I can catch exceptions. I might do this in future
> programs.
> 
> I dislike about this solution that it complicates most programs
> unnecessarily. In programs that open, read and process many files I
> don't want to explicitly decode and encode characters all the time. I
> just want to write:
> 
> for line in f:
> 
> or something like that. Yet, writing this means to *implicitly* decode
> text. And, because the decoding is implicit, you cannot say
> 
> try:
>     for line in f: # here text is decoded implicitly
>        do_something()
> except UnicodeDecodeError:
>     do_something_different()
> 
> This isn't really workable: the exception terminates the loop, and
> there is no way to resume at the line that failed.
> 
> The problem is that vast majority of the thousands of files that I
> process are correctly encoded. But then, suddenly, there is a bad
> character in a new file. (This is so because most files today are
> generated by people who don't know that there is such a thing as
> encodings.) And then I need to rewrite my very complex program just
> because of one single character in one single file.
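One way to keep the familiar `for line in f:` shape without rewriting an old program is to route the loop through a small decoding generator. `decoded_lines` below is a name invented here for illustration, assuming Python 3 and UTF-8 as the expected codec:

```python
def decoded_lines(path, encoding="utf-8"):
    # Yield decoded lines one at a time; on a bad byte, re-raise with
    # the file name and line number so the failure is easy to locate.
    with open(path, "rb") as f:
        for number, raw in enumerate(f, start=1):
            try:
                yield raw.decode(encoding)
            except UnicodeDecodeError as ude:
                raise ValueError(
                    "%s, line %d: %s" % (path, number, ude)) from None
```

An existing program then changes only at the top of the loop: `for line in decoded_lines(filename):`.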

In my mind you are taking the problem the wrong way.

Basically there is no "real UnicodeDecodeError": you are
simply attempting to read a file with the wrong codec.
Catching a UnicodeDecodeError will not correct the basic
problem; it will only show that you are using the wrong
codec. There is still the possibility that you are dealing
with ill-formed UTF-8, but I doubt that is the case.

Do not forget: a "bit of text" only has a meaning if you
know its encoding.

In short, your files are most probably fine; you are simply
not reading them with the correct codec.

>>> b'abc\xeadef'.decode('utf-8')
Traceback (most recent call last):
  File "<eta last command>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xea in
position 3: invalid continuation byte
>>> # but
>>> b'abc\xeadef'.decode('cp1252')
'abcêdef'
>>> b'abc\xeadef'.decode('mac-roman')
'abcÍdef'
>>> b'abc\xeadef'.decode('iso-8859-1')
'abcêdef'
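When the right codec genuinely cannot be determined, Python 3's decode error handlers offer a middle ground between crashing and guessing. This is a general language feature, not something from the thread:

```python
data = b'abc\xeadef'

# 'replace' substitutes U+FFFD for undecodable bytes; 'ignore' drops them.
print(data.decode('utf-8', errors='replace'))   # 'abc\ufffddef'
print(data.decode('utf-8', errors='ignore'))    # 'abcdef'

# 'surrogateescape' smuggles each raw byte through as a lone surrogate,
# so the original bytes can be recovered exactly on re-encoding.
text = data.decode('utf-8', errors='surrogateescape')
assert text.encode('utf-8', errors='surrogateescape') == data
```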

jmf


