[issue8670] ctypes.c_wchar should not assume that sizeof(wchar_t) == sizeof(Py_UNICODE)

Daniel Stutzbach report at bugs.python.org
Sun May 9 08:38:34 CEST 2010


New submission from Daniel Stutzbach <daniel at stutzbachenterprises.com>:

Using a UCS2 (narrow) build of Python on a platform with a 32-bit wchar_t, the following code raises a TypeError (but should not):

>>> import ctypes
>>> ctypes.c_wchar('\U00010000')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: one character unicode string expected

The trouble is in the u_set() function in Modules/_ctypes/cfield.c.  The corresponding u_get() function looks correct.
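
Not the actual patch, just a minimal sketch of the narrow-build case, assuming the standard CPython configure macros (Py_UNICODE_SIZE and SIZEOF_WCHAR_T); the helper name is hypothetical.  On a UCS2 build with a 32-bit wchar_t, u_set() could accept a surrogate pair and combine it into the single character it encodes:

    #include <Python.h>

    /* Hypothetical helper, not the real u_set(): on a UCS2 build with a
       32-bit wchar_t, a length-2 value may be a surrogate pair encoding
       one non-BMP character, which a single wchar_t can represent. */
    #if Py_UNICODE_SIZE == 2 && SIZEOF_WCHAR_T == 4
    static int
    ucs2_to_wchar(const Py_UNICODE *s, Py_ssize_t len, wchar_t *out)
    {
        if (len == 1) {
            *out = (wchar_t)s[0];                /* BMP character */
            return 0;
        }
        if (len == 2
            && 0xD800 <= s[0] && s[0] <= 0xDBFF  /* high surrogate */
            && 0xDC00 <= s[1] && s[1] <= 0xDFFF) /* low surrogate  */
        {
            *out = 0x10000
                 + (((wchar_t)(s[0] & 0x3FF)) << 10)
                 +  ((wchar_t)(s[1] & 0x3FF));
            return 0;
        }
        return -1;  /* caller raises TypeError, as today */
    }
    #endif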

On a UCS4 Python running on a system with a 16-bit wchar_t, u_set() will corrupt the data by silently truncating the character to 16 bits.
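
The reverse case calls for a bounds check rather than a blind cast.  Again a hedged sketch with a hypothetical helper name, not the real code: on a UCS4 build with a 16-bit wchar_t, a non-BMP character simply cannot fit in one wchar_t, so u_set() should report an error instead of truncating:

    #include <Python.h>

    /* Hypothetical helper, not the real u_set(): on a UCS4 build with a
       16-bit wchar_t, refuse characters above the BMP rather than
       silently dropping their high bits. */
    #if Py_UNICODE_SIZE == 4 && SIZEOF_WCHAR_T == 2
    static int
    ucs4_to_wchar(Py_UNICODE ch, wchar_t *out)
    {
        if (ch > 0xFFFF)
            return -1;       /* caller raises an error; no truncation */
        *out = (wchar_t)ch;
        return 0;
    }
    #endif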

For reference, Linux and Mac OS use a 32-bit wchar_t, while Windows uses a 16-bit wchar_t.

----------
assignee: theller
components: ctypes
messages: 105374
nosy: stutzbach, theller
priority: normal
severity: normal
stage: unit test needed
status: open
title: ctypes.c_wchar should not assume that sizeof(wchar_t) == sizeof(Py_UNICODE)
type: behavior
versions: Python 2.7, Python 3.2

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue8670>
_______________________________________

